Date: 02/12/06 (Algorithms)
Keywords: no keywords

i'm working on something vaguely inspired by grimson & stauffer's use of multiple gaussians to model the appearance at a pixel ("the" background subtraction paper in the computer vision literature). the basic algorithm is the usual per-pixel mixture update: match the incoming pixel against the existing models, nudge the matching model toward it, and spawn a new model when nothing matches. that's where the similarity between my stuff and grimson & stauffer's ends.

what i see in low-variance regions is precisely what one would expect: only one active model with fairly small (co)variance. in high-variance regions, however, it does something unexpected. instead of creating multiple models with uniformly distributed means and moderate (co)variances, it creates very few models with really high variance. what frustrates me is that if i didn't clip the (co)variances, it would degenerate to a single model with such a huge (co)variance that it's essentially a uniform distribution.

any thoughts on how to fix this? i suspect what i want to do is pull the matching model slightly closer to the input point, then apply a single round of a "charged particle" type step where each model tries to move a little bit away from all the others, then adjust the (co)variances to minimize overlap. does that sound reasonable? (rough sketches of both the baseline update and this idea follow below.)
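for concreteness, here's roughly the kind of per-pixel update i mean, in the spirit of stauffer & grimson's online scheme. this is a minimal sketch: the scalar intensities, the constants, the rho = ALPHA shortcut, and the lowest-weight replacement rule (they replace by lowest weight/sigma) are placeholders for illustration, not my actual code.

    import math

    ALPHA = 0.01        # learning rate (placeholder value)
    K = 4               # gaussians per pixel (placeholder value)
    MATCH_SIGMAS = 2.5  # match threshold, as in stauffer & grimson
    INIT_VAR = 900.0    # variance for a freshly spawned model (placeholder)

    def update_pixel(models, x):
        """one online update of a per-pixel gaussian mixture.

        models: list of [weight, mean, var] triples, modified in place.
        x: the new pixel intensity.
        """
        # find the best matching gaussian, if any (here: highest weight
        # among models within MATCH_SIGMAS standard deviations of x)
        best = None
        for m in models:
            if abs(x - m[1]) < MATCH_SIGMAS * math.sqrt(m[2]):
                if best is None or m[0] > best[0]:
                    best = m
        if best is not None:
            # decay every weight, then boost the matched one
            for m in models:
                m[0] *= 1.0 - ALPHA
            best[0] += ALPHA
            # pull the matched model toward the input; rho = ALPHA is a
            # common simplification of alpha * N(x | mu, sigma)
            rho = ALPHA
            best[1] += rho * (x - best[1])
            best[2] += rho * ((x - best[1]) ** 2 - best[2])
        else:
            # no match: replace the lowest-weight model with one centered
            # on x, with low weight and high variance
            worst = min(models, key=lambda m: m[0])
            worst[0], worst[1], worst[2] = ALPHA, float(x), INIT_VAR
        # renormalize the weights
        total = sum(m[0] for m in models)
        for m in models:
            m[0] /= total

    # usage: feed one pixel's intensity stream through the mixture
    models = [[1.0 / K, 128.0, INIT_VAR] for _ in range(K)]
    for x in (120, 122, 200, 121):
        update_pixel(models, x)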
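and here's one way the "charged particle" step could look. this is only a guess at a concrete form: the signed 1/d force, the step clamp, the half-distance-to-nearest-neighbor variance cap, and all the constants (PULL, REPEL, MAX_STEP, MIN_VAR, MAX_VAR) are made up for illustration.

    PULL = 0.05     # pull of the matched model toward the input (made up)
    REPEL = 1.0     # repulsion strength between model means (made up)
    MAX_STEP = 5.0  # cap on the repulsion step per round (made up)
    MIN_VAR, MAX_VAR = 4.0, 400.0  # (co)variance clipping range (made up)

    def repel_models(models, x, matched):
        """pull the matched model toward x, then one round of mutual
        repulsion between the means, then shrink overlapping variances.

        models: list of [weight, mean, var] triples, modified in place.
        matched: the model in models that matched the input x.
        """
        # step 1: pull the matching model slightly closer to the input
        matched[1] += PULL * (x - matched[1])
        if len(models) < 2:
            return
        # step 2: each mean takes one small step away from all the
        # others, with a signed 1/d falloff like charges on a line
        forces = []
        for m in models:
            f = 0.0
            for other in models:
                if other is m:
                    continue
                d = m[1] - other[1]
                if abs(d) > 1e-6:  # coincident means get no push
                    f += REPEL / d
            # clamp the step so near-coincident models don't explode
            forces.append(max(-MAX_STEP, min(MAX_STEP, f)))
        for m, f in zip(models, forces):
            m[1] += f
        # step 3: reduce overlap by capping each sigma at half the
        # distance to the nearest other mean, then clip to a sane range
        for m in models:
            nearest = min(abs(m[1] - o[1]) for o in models if o is not m)
            m[2] = min(m[2], (0.5 * nearest) ** 2)
            m[2] = min(max(m[2], MIN_VAR), MAX_VAR)

one thing i'd watch out for: the repulsion fights the data term, so REPEL would probably have to be tiny relative to PULL, or the means will drift away from the values actually observed at the pixel.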
Source: http://community.livejournal.com/algorithms/72967.html