Is PhotoFeeler really free?

Yep. By giving your opinion (voting) on other people's photos, you earn feedback on your own.

To save time and get feedback faster, you also have the option to buy credits instead of voting.

Question answered? Go back to sign up now!


Who can see my photos?

When you start a test on a photo, other logged-in PhotoFeeler users (within your selected voter demographic) can see that photo on the Voting page in order to give their feedback. When the test is ended, the photo becomes entirely private again.


Will photos I upload here show up in Google search?

No. Your photos can only be seen by other logged-in users while you're running a test.


Will my photos ever be used or displayed without my consent?

No, not in the way you might fear. Every photo we use for marketing or demo purposes is used with explicit permission. We never want to overstep boundaries or embarrass anyone.

PhotoFeeler is, however, a research project in addition to a photo testing tool, so we will be using your photo behind the scenes in our research efforts. This can mean a variety of things: having us (or people who work with us) tag your photos for your gender, publicly sharing numbers or statistics of which your data may be a part, things of that nature. Never anything personal.



How does Karma work?

You earn Karma by giving your opinion (voting) on other users' photos.

Your Karma level can be Low, Medium, or High. Every time you submit a vote, a progress bar appears on the vote button, showing how close you are to reaching the next Karma level.

The more Karma you earn, the more votes you can expect to receive on your own Karma test.

As your test collects votes, your Karma level gets used up, but you can vote again to raise it. Alternatively, you also have the option to buy credits instead of voting. Photo tests using credits also receive votes faster.


Are there any guidelines for voting correctly?

1. Always be honest— it's the best favor you can do a fellow user.

2. Rate based on how you feel about the person, not the quality of the photo. (The latter is best addressed in the notes box.)

3. Ignore logos and watermarks. The most common are LinkedIn logos from profile photo imports, and photographers' watermarks on proof images users are deciding whether to purchase.

4. Don't assume the person in the photo is the user who uploaded it.


How did you choose the 3 traits users vote on in each photo category?

In choosing our default traits, we asked loads of people what they think and feel about photos on LinkedIn, Facebook, OKCupid, etc. We then narrowed the preferences we collected down to the handful of traits that came up most consistently.

We put a great deal of thought into our default traits, but that's not to say they won't ever change.


Why don't you just compare two different photos ("pick A or B")?

There are several reasons why we use trait-based testing rather than asking voters to choose their favorite photo.

  1. Testing photos one at a time means you can find the top few photos in a large set without having to check numerous combinations and solve a logic puzzle.
  2. Any photo can be compared to any other from the past or in the future (by sorting), to other users (with Ranks), and to an absolute scale (with Scores).
  3. Showing multiple photos at once introduces bias (gives the voter more info than they would actually have when viewing each photo individually).
  4. By testing for traits, you get multidimensional results, so you can make contextual decisions. For instance, a photo that scores high in Likable but low in Competent might do well with voters but poorly with recruiters. Similarly, a photo might be Attractive but not Trustworthy, making it hard to actually land a date.
  5. Our system also makes it possible to algorithmically characterize voting styles and account for them, and to detect and eliminate random voting, both of which are critically important to giving statistically accurate results.

All that said, a system like ours is much more complex to build and run. (A "pick A or B" system can, for instance, collect 3 clicks and declare Photo B the winner, even in cases where those 3 clicks were from people who always click on Photo B.) But we at PhotoFeeler have always believed that it was worth it to do photo testing right.


What if the same picture has come up again?

The short answer is: it's not possible to vote again on the same test.

One factor at play here is that our system intentionally spaces out photos of the same person so that, by the time a voter sees a picture of someone they've seen before, the details have become fuzzy.

In most reported cases of seeing the same picture, we've found that the photo in question was slightly different from the original. Maybe it's cropped differently, features a slightly different expression, or shows different clothes. Alternatively, it's possible, though less common, that a user started a brand new test using a photo they've tested previously.

In any case, please continue to vote respectfully.


How do I interpret my results in PhotoFeeler Ranks mode?

PhotoFeeler Ranks compare your photo's score with the scores of all the other photos that have been tested on our site.

PhotoFeeler Ranks are given as a percentile. So, for instance, a rank of 58% means your photo did better than 58% of photos.


We take special care to take into account the number of votes received when calculating ranks. For example, it is much rarer to get an average Competent score of +2.50 with 10 votes (97th percentile) than it is to get an average Competent score of +2.50 with only two votes (81st percentile). A small sample of votes is more likely to score unusually high or low due to sampling error.
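The idea above can be sketched in a few lines of code. This is purely illustrative: PhotoFeeler's real formula isn't public, and the `prior_mean`, `prior_weight`, and `percentile_rank` names here are made up for the example. The sketch pulls a small-sample average toward a prior before ranking it, so the same +2.50 average ranks lower with fewer votes.

```python
def percentile_rank(photo_avg, num_votes, population_avgs,
                    prior_mean=0.0, prior_weight=5):
    """Illustrative only: shrink a photo's average score toward a prior
    based on vote count, then rank it against other photos' averages."""
    # Small samples are pulled harder toward the prior mean, so a +2.50
    # average on 2 votes ranks lower than a +2.50 average on 10 votes.
    shrunk = (photo_avg * num_votes + prior_mean * prior_weight) \
             / (num_votes + prior_weight)
    beaten = sum(1 for other in population_avgs if other < shrunk)
    return 100.0 * beaten / len(population_avgs)
```

With a toy population of photo averages, `percentile_rank(2.5, 10, pop)` comes out higher than `percentile_rank(2.5, 2, pop)`, matching the 97th-vs-81st-percentile behavior described above.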


In Ranks mode, what are the faded sections on the bars?

If you hover or tap on a faded section, a tooltip will tell you your current Rank's confidence interval.


A confidence interval is the range in which we're fairly certain your "true" Rank lies. (That is, the Rank your photo would end up with if you had thousands of votes.)

The more votes you add to your test, the smaller these ranges get. Note that photo tests with very split opinions will have wider confidence intervals and require more votes to obtain the same certainty.
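To see why both factors matter, here is a minimal sketch using a standard normal confidence interval for the mean vote. This is not PhotoFeeler's actual method, just the textbook calculation: the interval narrows as votes are added and widens when opinions are split.

```python
import math
import statistics

def rank_confidence_interval(votes, z=1.96):
    """Illustrative sketch: a ~95% normal confidence interval for the
    mean vote. More votes shrink the standard error; more disagreement
    (higher spread) widens it."""
    mean = statistics.mean(votes)
    se = statistics.stdev(votes) / math.sqrt(len(votes))  # standard error
    return (mean - z * se, mean + z * se)
```

For example, ten votes of `[2, 3, 2, 3, ...]` give a narrower interval than four of the same votes, and ten split votes of `[0, 3, 0, 3, ...]` give a wider one than either.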


In Scores mode, what are the dark-colored notches in the bars?

If you hover over any individual notch, a tooltip will tell you precisely what it means.


Generally, though, these notches represent the voting style of the person behind the vote and how that vote was weighted in your PhotoFeeler Rank.

For instance, if someone voted 3 ("Very"), but they consistently rate photos very high for that trait, you'll see a notch on the far left of the bar. That means their "Very" doesn't count as much as the "Very" of a person who rarely leaves 3s for that trait.
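One simple way to picture this kind of adjustment (a hypothetical sketch, not PhotoFeeler's real algorithm) is to express each vote relative to that voter's own rating history, so a "3" from someone who rates almost everything a 3 carries less weight than a "3" from a stingier voter:

```python
import statistics

def style_adjusted_vote(raw_vote, voters_past_votes):
    """Hypothetical sketch of down-weighting habitual high-raters:
    express a vote as a z-score against the voter's own history."""
    mean = statistics.mean(voters_past_votes)
    spread = statistics.stdev(voters_past_votes) or 1.0  # guard against zero spread
    return (raw_vote - mean) / spread
```

Under this sketch, a 3 from a voter whose history is `[0, 1, 0, 1, 2, 0]` adjusts to a much larger value than a 3 from a voter whose history is `[3, 3, 3, 2, 3, 3]`, mirroring the notch positions described above.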


How do my photo results compare to everyone else's?

By default, your photos' results will be shown by PhotoFeeler Rank (a percentile).

If you score 70% in Likability, for instance, it means your photo scored higher in Likability than 70% of photos in our database.


What scores should I be shooting for exactly?

A rank of 50% is average, so you'll want to score at or above that level.

Based on what we've seen, though, we believe that anyone can hit the 90th percentile (a 90% PhotoFeeler Rank or higher) in all categories with enough experimentation.


Does the "Attractive" Rank correlate with a rating of 1-10?

We've noticed that Dating users can get very discouraged by a photo's low Attractive % due to common misconceptions specific to measures of Attractiveness. Allow us to clear the air with some truths:

1. Our Attractive %s have nothing to do with a 1-10 scale

Many people assume a score of "20%" relates to a "2 out of 10" on a 1-10 scale, but these systems aren't actually related at all. Like all other traits on our site, Attractive Ranks are given as a percentile. A percentile of 20% means that your photo did better than 20% of tested photos in our database. That isn't necessarily bad; it just hasn't beaten out 80% of the competition!

Keep in mind that it can be tough competition around here— especially when users test many photos with better and better results.

2. Different photos of the same person get very different results

It's a mistake to test one photo and then assume your result reflects how you're perceived in real life. Every photo tells a different story; no one photo will ever show you "as you are" enough to get a "true" rating. So remember: PhotoFeeler tests photos, not people!


I got the same exact note several times. Are these repeats from the same person?

No. Sometimes several users will send the same Quick Note, resulting in what looks like repeats. These notes were actually sent from different people.


Why have real people voting on photos? Why can't an algorithm just judge my photos instead?

PhotoFeeler's own co-founder/CTO has a PhD in Optimization Algorithms and experience writing artificial intelligence for Fortune 500 companies. The fact is, the way we interpret each other's faces is one of the most complex mental processes there is, and the field is still a long way from capturing all of the nuance involved.

What PhotoFeeler does with algorithms and machine learning, however, is monitor vote quality, detect all manner of voter fraud in real time, and use sophisticated score distribution analysis, accounting for factors like individual voter styles, to optimize the accuracy of test results. The result is statistical accuracy far beyond what a small number of votes could normally provide.

So get reliable results based on real people's feedback now, and who knows what AI our own team may cook up later. ;)


Isn't the voting system easily gamed (making my results worthless)?

Nope. Voting on PhotoFeeler is virtually impossible to game thanks to sophisticated artificial intelligence that detects all manner of voter fraud in real time.

While voters who have received warnings or been banned from our system have many theories about why or how that happened, the truth is our system is much more complex than any of these theories account for.

The good news is, since activating these particular algorithms, low vote quality is basically nonexistent.