
Friday, 22 May 2009



"...and the one below came back with an appalling 6.2"

Don't feel too bad. This (which was a favorite of mine from last weekend's hike in the Wild Basin of Rocky Mountain National Park) came back as a 1.3.

A Macbeth chart is almost 8 times more aesthetically pleasing? Yikes!

I played around with this yesterday and found that borders are key to high scores. Apparently nothing says aesthetic quality like the design sense of a motivational poster.

In line with what I imagine will be many people's results, mine were a smattering to and fro.

However, I noticed when entering a certain black and white photo, my score was 89.3. A color version of the same photo went down to 39.5. Harrumph.

Great tip, but have you read the bold fine print on the ACQUINE site: "A rule of thumb is that if the aesthetic quality of a photo is obvious to most people, it may not be worthwhile to seek Acquine's opinion on it because Acquine may choose to assign funny scores in such cases. Please be serious if you would like Acquine to help you."
So it would seem we're all not being serious enough ;-)

Now I'm off to rate my PICs...

The results for Chris May's (very nice) photo prove that Black and White is aesthetically doomed.

"However, I noticed when entering a certain black and white photo, my score was 89.3. A color version of the same photo went down to 39.5."

Maybe there's something to this after all....


Now you know why people frame their prints.

..and you seriously believe that's genuine software - some cackling performance artist hasn't hired a team of monkeys, or wired up a jellyfish or something?

I don't know... I looked at the "top rated of all time" and I don't trust it. Seems to be just like Flickr, a bunch of HDR and cat photos. Sometimes even Infrared HDR cat photos.

My scores for 5 pictures varied from 3.7 for this, to 97.7 for this. Which just proves that I am a photographer who can produce the near-perfect picture and absolute rubbish. No news to me.

Hey, at least it can be bothered to look at your images and pass an objective (by its standards) comment. That makes it more valuable than 95% of online commenters.

It looks to me as though it really likes pictures that have a wide tonal range. Look at their top pictures of all time, and you'll quickly notice that they all seem to have large areas of full black. I suspect that adding a generous very dark area is why the frame boosts the score of your photographs.
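That "wide tonal range plus large dark areas" hypothesis is easy to state in code. The sketch below is not anything ACQUINE actually does, just an illustration in Python of two measurements a curious reader could make on their own photos; the pixel lists, the dark cutoff, and the function name are all invented for the example.

```python
# Hedged sketch: quantify the "wide tonal range + large dark areas"
# hypothesis. The pixel values below are hypothetical 8-bit grayscale
# samples; a real test would read them from an actual photograph.

def tonal_stats(pixels, dark_cutoff=16):
    """Return (tonal_range, dark_fraction) for 8-bit grayscale pixels."""
    tonal_range = max(pixels) - min(pixels)
    dark_fraction = sum(1 for p in pixels if p <= dark_cutoff) / len(pixels)
    return tonal_range, dark_fraction

# A flat mid-gray "image": narrow range, no deep blacks.
flat = [128] * 100  # -> (0, 0.0)

# The same "image" with a black border (and a few highlights) added.
bordered = [0] * 36 + [128] * 100 + [255] * 4  # range 255, ~26% dark

print(tonal_stats(flat))
print(tonal_stats(bordered))
```

The point is only that adding a black frame simultaneously widens the measured tonal range and raises the dark-pixel fraction, which would flatter any scorer that rewards both.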

I also noticed that somebody had loaded a bunch of Ansel Adams photos. "Moon and Half Dome" apparently rates a 99.5.

I believe it was Degas who said "The frame is the artist's reward," and apparently the software agrees.

I love richardplondon's take on this with reference to the "team of monkeys or wired up jellyfish". I can visualize the setting of a low-rent apartment with a bunch of chimps in front of a Mac Classic monitor looking at images. I'm sure a framed jungle scene would do well and zoo photos, well.........or the jellyfish tank by the monitor all aglow with a nice seascape, and the light goes dim with a shot of the desert. So the program is called ACQUINE? You sure that's not EQUINE................Ass?

I'd just like to introduce the notion that maybe we should be striving for LOWER scores, on the assumption that anything considered laudable by a computer program is by definition a cliche.


You're all missing what's really going on here. You see, what they're saying is that artistic merit is to a large extent arbitrary, or we'd have more agreement on standards and value. So, a software-analyzed photo is as arbitrary as any other, no better, no worse. If they can corner the market on computerized esthetic analysis, or at least be the first on the block, well, it could be the next big thing. Imagine doing away with all those expensive critics and gallery curators, art history professors, web forum contributors, etc., so that the whole world comes to your software for validation.

That's just the beginning. The next step is to develop image-taking software that matches that analyzer's value system, and then presto, you have a robot camera that will generate the world's best pictures and all photo buyers will have a central web location to buy those world's best pictures, the analyzer says so. You can fire all those people on staff who research and buy your images for you.

It's all over but the crying.

I guess I'll think of this next time I read about an outrage perpetrated by a performance artist with NEA funding. My GOD, can you believe the way these people are throwing around taxpayer dollars from the NSF?!? It's APPALLING!

It reminds me of Painting by Numbers: Komar and Melamid's Scientific Guide to Art

> ... maybe we should be striving
> for LOWER scores...

A subjectively good photo that gets a low rating by a cliche-based software?
That implies it works!


On a lark, I fed it the cover photo from the DOMAI website. Result: 7.3.

I will refrain from making any comment on ACQUINE's assessment relative to the Macbeth chart.

"Infrared HDR Cat photo" made me laugh. Of course then I googled it. Actually, the result _was_ pretty interesting... http://www.flickr.com/photos/platypusscooter/3023788754/

I went one step further and submitted two simple single-color backgrounds, a white and a black. Would you believe it? White scores higher than black. The light wins over darkness. It's very Tolkien-like.

The score? white: 41.7% - black: 24.6%.

I think the software needs serious tweaking.

Old news, Ctein - editors have been alpha-testing this since 1826!

Mike, it's supposedly tougher than a human audience because they, again supposedly, strive against grade inflation.

But it has some silly biases for a site that's supposedly (:-)) about colour photography. I uploaded a colour photo with a thin white border. 64. The same photo with a thin black border, 77. And then this one, 98.4. And I'll admit this is a pretty cliched photo, ain't it?

I think that my personal sense of aesthetics must differ from that of the people who designed the program.

I'm also a bit sceptical about the consistency of the algorithm, given that after a while I started to see if I could game it by selecting pictures that were compositional matches for the high scoring ones. Nope.

Still, it provoked me into discovering that Zenfolio provides URLs for multiple sizes of my images, which is useful information!

Richard Plondon asks if "some cackling performance artist hasn't hired a team of monkeys, or wired up a jellyfish or something".

I suggest that some computer scientist just hired a performance artist to sit at a terminal and evaluate photographs.

The comments about it being a "team of monkeys, or a wired up jellyfish or something" have missed something important: a team of monkeys, at least, would probably do far better than this, which clearly just has no correlation at all with anything interesting. Monkeys are no fools, and many of them have remarkably good taste.

I'm not so sure about jellyfish, but I wouldn't be surprised if they do better too.

HCB was in fact a bad photographer, this one got a 2,7/100
I always thought he was overrated, now I'm sure...
The saddest thing is that some people are working hard on this; it must be really tough to program.

According to this thing I am a better photographer than HCB

I knew that dude was overrated.

I had my own ups and down with this program on Wednesday. Here are my results and what happened afterwards:


Could someone point me to the infrared HDR cat photo group on Flickr? I can't find it, and it seems as if I could find some kindred spirits there.

"HCB was in fact a bad photographer"

Actually, it's not that bad. I enlarged the photo to 600x406 (they say it's better if the photos are at least 600px per side), added 25px of black border on each side and voila: 68.3.

Ctein has apparently been using the wrong color chart. The Kodak Q-60 gets a 57.3. Obviously making it about nine times better than the Macbeth's 6.2.


For heaven's sake!!!
You made me lose my whole afternoon!!!
This is the very best pastime ever!!!
And the sense of humour it shares, and the laughs it causes!
God, I've been stuck to my screen the whole afternoon, awestruck!!!
Can't wait to show them to me lads!

I think we should all pour ourselves a cup/glass of our favorite brew (beer/ coffee/ whatever), flop down in a puffy chair, and breathe a sigh of relief. Our lives are already dominated by overly "thinking" machines. Yes, yes, I admit it is nice to have improved technology, and in some cases it can save lives, etc. But if and when someone creates a machine that can mimic or replicate the more complex facets of the human psyche, the jump to a HAL is but a short one.

Actually, this reminds me of a logician who claimed he could analyze Zen texts to decode the experience of spiritual enlightenment. Well, the paper certainly evoked a response: one page in, and the mind grows numb.

As Mike intimated, I'd prefer to leave the clichéd operations to the computers, and the creative innovation and interpretation of values to us fallible humans.


In a somewhat similar vein, check out what kind of funny paintings a poll will produce here: http://www.diacenter.org/km/homepage.html

Hint: LOTS of blue

Interesting that using a different size picture from Flickr got different results on the one I tried. Most of the B/W pictures got into the 90s, and the highest rated one was 98.2, and not one I would think could get in the door. The color versions rated much lower... But heck, it was a good diversion this evening.

Mike T.

Here's a single line of shell script which delivers as good or better results than this program.

wget -q -O- "$IMAGEURL" | cksum | awk '{ print $1 % 100 }'

Give me a break. I tested it this morning.

It seemed to be all over the place with some of my B&W stuff---some good, some not. Sometimes it even rated the same photo in color higher than the B&W version, and vice versa. Unbelievable!

More importantly, it could not consistently determine the superiority of photos taken with more modern, higher resolution dSLRs and some taken with an old Olympus ($110) P&S. Some of the photos from the P&S scored much higher than photos from a D40, D70, or even a D300. I suppose when I scan and upload a few from film, it will misjudge those too.

And what's wrong with infrared cat photos?

You all realize, don't you, that this will be the hot digicam feature of 2011? "Art Recognition" mode will only take the shot when the "aesthetic quality" rating of the scene is 90 or above, as indicated in real time by a bar gauge on the LCD. Who knows what will result, but it will be worth it for the contortions it evokes from earnest snappers everywhere. Barring too many mishaps from shooters backing into traffic or tripping over cacti, the next generation of the feature may let one specify styles: "Eggleston Recognition mode", "Vermeer Recognition mode", etc. A couple generations on and the camera will talk you through it. Mark my words.

I tried half a dozen images that have won picture of the day competitions, and the scores ranged from 3.7 to 78.6. Amazing!
Ken from the UK

My night shot of a 1977 BMW R100RS motorcycle (http://inlinethumb34.webshots.com/43809/2661199120028267445S600x600Q85.jpg score=99.8) outscored an Ansel Adams photo whose link I submitted. I feel better now, although I am pretty sure that Adams will continue to outsell me. Considering that I've never sold anything, that should not be difficult. Interestingly, some of my personal favorite photos of my own scored poorly, while I see some obvious composition flaws in the BMW photo.

"HCB was in fact a bad photographer, this one got a 2,7/100 "

Looks like the deleteme pool was right after all.

I *LIKE* that cat photo!

pax / Ctein

After the national and international nightmares of the last decade, playing with this joke of a program for 10 minutes has almost completely restored my faith in humanity.

Has anyone checked to see that the site isn't just a random generator?


I don't think it's random, since it gives different copies of the same picture, with different names and from different URLs, roughly the same scores (I'm willing to concede a point or two to account for jpg compression differences; it seems to prefer higher-quality compression).

So if it's not random, and it's not a performance art piece, not chimps or jellyfish, if in fact someone is making a serious attempt to predict aesthetic appeal solely through computational methods ... well I just find that a really depressing waste of time.

And I'll be even more depressed if they eventually succeed in actually predicting the popularity of an image, though good luck finding an accurate way of quantifying that in the real world.

Meanwhile, plugging one's work into this tool feels a little like submitting one's prose to Microsoft Word for grammatical approval: as I said, just really depressing.

Was pleased to get a 91.5 on one of my photos, until I noticed that a totally black image scored higher. Also noticed that when I clicked on the "more from this user" icon under my photo, the site displayed a dozen photos I'd never seen before.

That's not as funny as it seems. An almost identical program judges whether I (or my colleagues) can take out a mortgage loan for an apartment.

Reminds me of a comment made by Sir Humphry Davy, English scientist and inventor of the Davy safety lamp.

He was asked his opinion after viewing an art exhibition in Paris and commented: "The finest collection of frames I ever saw".

This program is clearly rubbish. Whilst this isn't the best picture in the world, ACQUINE rated it at 4.7 - poor girl...

Dear Mark,

I don't assign numerical ratings, not being a computer, but the composition on that photo does seriously suck.

In general, I'm a bit surprised at how resistant people are to the very idea that a computer might be able to perform some of this sort of evaluation.

Do people really think that aesthetics is so obviously all contextual and spiritual that it has NO substantial psychophysical component? More to the point, are they so convinced this is true that they think the experiment is inherently unworthy of the effort?

pax / Ctein

This photo got 0.8. A new low record?

Ctein - thanks for the critical analysis of my photo. In my defence, I was crushed in a crowd with the floats going past, simultaneously making sure my kids didn't run out into the road. Composition wasn't really an option... :-)

I think this sort of project could succeed, but only with a lot of training. I suspect something could be done with user feedback (think hotornot.com) and a neural net, perhaps working on a blurred or otherwise reduced version of the image to pick out the composition and colour balance (evaluated separately?) rather than the fine details. My Nikon camera allegedly has a 30,000 image database for the matrix metering - this sounds like a similar project to that, but with aesthetic evaluation as an output.
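For what it's worth, the training loop described above can be caricatured in a few lines. This is a toy sketch, not ACQUINE's algorithm or Nikon's database: the "features" (mean brightness and contrast) stand in for whatever a real system would extract from a blurred thumbnail, and the sample "images" and crowd ratings are entirely invented.

```python
# Toy sketch: learn an aesthetic score from crowd ratings of reduced
# images. All names, features, and data here are hypothetical.

def features(pixels):
    """Crude stand-ins for features of a blurred thumbnail."""
    mean = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    return [1.0, mean / 255, contrast / 255]  # bias + normalized features

def train(samples, ratings, steps=2000, lr=0.1):
    """Fit a linear scorer to (pixels, rating) pairs by stochastic
    gradient descent on squared error."""
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for pixels, y in zip(samples, ratings):
            x = features(pixels)
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def score(w, pixels):
    return sum(wi * xi for wi, xi in zip(w, features(pixels)))

# Invented training data: raters preferred the high-contrast "images".
samples = [[0, 255, 128, 64], [120, 130, 125, 128], [0, 0, 255, 255]]
ratings = [0.9, 0.2, 0.95]
w = train(samples, ratings)
```

A real system would of course use a proper neural net and thousands of rated images, but the shape of the idea is the same: hand it human judgments, let it fit weights, and hope the features capture something about composition rather than just contrast.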

Dear Mark T,

I'm glad you took the critical analysis in the spirit in which it was made [ VBG ].

I also intentionally fed the program a couple of photographs of mine that I thought were pretty badly composed to see what it would do, and it largely agreed.

I think the project is actually being run the way you suggest. It's been around for a while, and if they were only interested in doing internal analyses, they wouldn't open it up to the entire world to submit photos. It's not about beta-testing a product, like some people think; it's about testing a model to destruction so you can learn why it fails.

In one sense, it is working pretty much like I would expect. It's not producing a lot of false positives (highly-rated photographs with sucky compositions), which means the most basic "aesthetic engine" has a fair understanding of good rules of design and composition. What causes the problems are the exceptional cases, the ones that produce false negatives, like my third figure and the photograph Ole just mentioned.

That's where the research gets interesting. Do those two photos fail the computer analysis because their appeal is contextual (that is, it's about what we're looking at, instead of how it's arranged) or is it that the aesthetic model isn't sophisticated enough yet?

I have my suspicions, but I honestly don't know the answer. Nor does anyone else reading this column. Which is what makes running experiments like this an inherently interesting and valuable endeavor.

Unless, that is, one is the sort who thinks that aesthetics should remain some ineffable, undefinable, human construct. That doesn't describe me, but it does describe some people.

~ pax \ Ctein
[ Please excuse any word-salad. MacSpeech in training! ]
-- Ctein's Online Gallery http://ctein.com
-- Digital Restorations http://photo-repair.com

Since I have worked in that field for twenty years (before retiring to become a full-time photographer), I can state without doubt that the researchers who authored this software have excellent credentials, and have presented the research in extremely credible peer-reviewed venues.

It is possible that individual results appear erratic; however, the whole approach includes sound ideas deserving of more examination than the dismissive and derogatory judgements in some of the comments.

Well human rating systems are not so great either. Or are they?

You guys have to take a look at this:



