It's those damn fancy computers! And software! AAGGHH! Those things can be so dang confusing.
Solomon says that no human individual paired the pic with the story, that a technological foul-up was to blame, and that the paper is tweaking its photo selection software to make sure this doesn't happen again.
"The theme engine, through automation, grabbed a photo it thought was relevant, and attached it to the story," Solomon says, acknowledging that the photo had gone up without a person seeing it. "There was no editorial decision to run it. As soon as it was brought to our attention, we pulled it down."
Solomon also conceded that the automated system the paper has in place to pick photos doesn't have a tight enough screening process, and said steps were being taken to fix that.
"We regret that the technology has let us down in this case," he said, "and are working to make sure that the [photo] matches are more relevant in the future."
So apparently it was a computer software program, something probably commonly used within the industry to cut costs by eliminating the need for human labor, that deserves blame in this matter. Normally we'd be more than willing to accept this explanation without much hesitation, but considering that the newspaper committing the gaffe is the most conservative daily broadsheet in the country, it does cause one to question its legitimacy, sadly, if only for a moment. Perhaps some of you Gawker readers have some insight into the sort of photo-selection software used by newspapers. Please feel free to tell us if we need to call bullshit on the Times' explanation.