I can't really decide on the best way to treat the different channels (red, green, blue) in a color image when doing edge detection. Is it better to just average the gradient magnitudes of the channels? Or treat the channels separately and find red edges, green edges, and blue edges? My pragmatic (if ad hoc) choice is to just add them all up, and use a kludge to keep from getting negative gray-scale values.
Here's the Python for gradient-based edges. The sum of the three channels' gradient magnitudes is subtracted from 255; since gray-scale values range from 0 to 255, a max() call clamps any resulting negative values to 0. The edges detected using this scheme are shown below, on the right.

import sys
import math
from PIL import Image

# open the input image and make a matching gray-scale output image
img = Image.open(sys.argv[1]).convert("RGB")
w, h = img.size
outimg = Image.new("L", (w, h), 255)

inpix = img.load()
outpix = outimg.load()
s = 1.0
# calculate the gradient in all three channels, sum of squares for mag
for y in xrange(2,h-1):
    for x in xrange(2,w-1):
        rdx = ( inpix[x+1,y][0] - inpix[x-1,y][0] )/2
        gdx = ( inpix[x+1,y][1] - inpix[x-1,y][1] )/2
        bdx = ( inpix[x+1,y][2] - inpix[x-1,y][2] )/2
        rdy = ( inpix[x,y+1][0] - inpix[x,y-1][0] )/2
        gdy = ( inpix[x,y+1][1] - inpix[x,y-1][1] )/2
        bdy = ( inpix[x,y+1][2] - inpix[x,y-1][2] )/2
        # sum the per-channel gradient magnitudes, invert, and clamp at 0
        outpix[x,y] = int( max( 0, 255 - s*( math.sqrt( rdx**2 + rdy**2 )
                                           + math.sqrt( gdx**2 + gdy**2 )
                                           + math.sqrt( bdx**2 + bdy**2 ) ) ) )
outimg.save(sys.argv[2],"JPEG")
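Assuming the script above is saved as something like edges.py (the post doesn't name the file), it takes the input path as sys.argv[1] and the output path as sys.argv[2], so it would be run along the lines of:

python edges.py photo.jpg photo_edges.jpg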
Taking a careful look at the code, you may notice the parameter s floating around in there. It lets you "darken" the edges by choosing something larger than 1. Here's the same image with s=3.0 (on the left).

As in the previous image processing posts, these examples were made using the Python Imaging Library for image input/output and easy access to the pixels.
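For comparison, the "average the magnitudes" option mentioned at the top only changes how the three channel magnitudes are combined. Here's a minimal sketch of that inner-loop step, assuming the same inpix/outpix setup and gradient variables as above (the mag name is mine):

        # average of the three per-channel gradient magnitudes instead of their sum
        mag = ( math.sqrt( rdx**2 + rdy**2 )
              + math.sqrt( gdx**2 + gdy**2 )
              + math.sqrt( bdx**2 + bdy**2 ) ) / 3.0
        outpix[x,y] = int( max( 0, 255 - s*mag ) )

Since the average is just a third of the sum, this mostly amounts to rescaling; with the same s the edges come out lighter than in the summed version.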