
Thread: [Python] Neopets OCR

  1. #1

    Joined
    Sep 2012
    Posts
    39
    Userbars
    0
    Thanks
    0
    Thanked
    31/15
    Mentioned
    11 times
    Time Online
    2d 8h 38m
    Avg. Time Online
    N/A

    [Python] Neopets OCR

    Greetings,

    Figured I would share the small OCR cracker I made in Python. It uses the Python Imaging Library (PIL) to process the image: it converts the image to greyscale, finds the darkest pixel value with getextrema(), and builds a mask that outlines the pet. It then uses getbbox() to create a virtual rectangle (left, top, right, bottom) around the focal point of the pet. Finally, the x and y coords I use are the center of that rectangle (the most human-looking click point).

    Code:
    from PIL import Image             # classic "import Image" also works on old PIL installs
    
    im = Image.open("capt.jpg")
    im = im.convert("L")              # greyscale
    lo, hi = im.getextrema()          # lo = darkest pixel value in the image
    im = im.point(lambda p: p == lo)  # mask: non-zero only where the pixel is darkest
    rect = im.getbbox()               # (left, top, right, bottom) around the masked pixels
    x = 0.5 * (rect[0] + rect[2])     # center of the box
    y = 0.5 * (rect[1] + rect[3])
    So far it's been 100% accurate, and the points it has chosen have been very human-like. For those interested, this code will be integrated into Neolib with the next commit.
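
    If anyone wants to drop this into their own script, here's a minimal sketch that wraps the same steps into a reusable function. It assumes Pillow (the maintained fork of PIL); the function name find_click_point and the example filename are mine, not part of Neolib.

    Code:
    from PIL import Image  # Pillow; assumed here instead of classic PIL
    
    def find_click_point(path):
        """Return the (x, y) center of the darkest region in a captcha image."""
        im = Image.open(path).convert("L")      # greyscale
        lo, hi = im.getextrema()                # lo = darkest pixel value
        mask = im.point(lambda p: p == lo)      # non-zero only at the darkest pixels
        rect = mask.getbbox()                   # (left, top, right, bottom)
        x = 0.5 * (rect[0] + rect[2])           # horizontal center
        y = 0.5 * (rect[1] + rect[3])           # vertical center
        return x, y
    
    # Example usage with a hypothetical saved captcha image:
    print(find_click_point("capt.jpg"))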

  2. The Following 6 Users Say Thank You to AliasXNeo For This Useful Post:

    DarkByte (03-12-2013), I_royalty_I (10-20-2012), j03 (10-20-2012), malky (02-23-2016), runbikesurf (10-20-2012), Vegeta (10-20-2012)

  3. #2
    mistry121's Avatar
    Joined
    Dec 2011
    Posts
    118
    Userbars
    1
    Thanks
    18
    Thanked
    32/20
    DL/UL
    84/0
    Mentioned
    17 times
    Time Online
    1d 13h 5m
    Avg. Time Online
    N/A
    wow I learned how to do this in my programming engineering class, never figured it would actually be useful LOL
