Is Photofeeler really free?

Yep. By giving your opinion (voting) on other people's photos, you earn feedback on your own.

To save time and get feedback faster, there's also the option to buy Credits instead of voting.

Question answered? Sign up now!

Who can see my photos?

When you start a test on a photo, other logged-in Photofeeler users (within your selected voter demographic) can see that photo on the voting page in order to give their feedback.

When the test is ended, the photo becomes entirely private again.

Will photos I upload here show up in Google search?

No. Your photos can only be seen by other logged-in users while you're running a test.

Will my photos ever be used or displayed without my consent?

No. Every photo Photofeeler publishes for demo or marketing purposes is with explicit permission.

Photofeeler does conduct research internally. This can mean a variety of things: having internal employees or contractors tag your photos by gender, publicly sharing numbers or statistics of which your data may be a part, things of that nature. Never anything personal.

How does Karma work?

You earn Karma by giving your opinion (voting) on other users' photos.

Your Karma level can be low, medium or high. Every time you submit a vote, you'll notice a progress bar appears on the vote button. This bar shows you how close you are to reaching the next Karma level.

The more Karma you earn, the more votes you can expect to receive on your own Karma test.

As your test collects votes, your Karma level gets used up, but you can vote again to raise it. Alternatively, you also have the option to buy Credits instead of voting. Photo tests using Credits also receive votes faster.

Are there any guidelines for voting correctly?

    1. Always be honest; it's the best favor you can do a fellow user.

    2. Rate based on how you feel about the person, not the quality of the photo. (The latter is best addressed in the Notes box.)

    3. Ignore logos and watermarks. Most commonly: LinkedIn logos from profile photo imports; photographers' watermarks present when users are choosing images to purchase.

    4. Don't assume that the person in the photo and the user who uploaded it are always the same.

How did you choose the 3 traits users vote on in each photo category?

In choosing Photofeeler's default traits, the team asked loads of people what they think and feel about photos on LinkedIn, Facebook, OkCupid, Tinder, etc. Those responses were then distilled down to a few foundational traits.

A great deal of thought went into choosing these default traits, but that's not to say they won't ever change.

Why don't you just compare two different photos ("pick A or B")?

There are several reasons why Photofeeler uses trait-based testing rather than asking voters to choose their favorite photo.

  1. Testing photos one at a time means you can find the top few photos in a large set without having to check numerous combinations and solve a logic puzzle.
  2. Any photo can be compared to any other from the past or in the future (by sorting), to other users (with Ranks), and to an absolute scale (with Scores).
  3. Showing multiple photos at once introduces bias (gives the voter more info than they would actually have when viewing each photo individually).
  4. By testing for traits, you get multidimensional results, so you can make contextual decisions. For instance, a photo that scores high in Likable but low in Competent might do well with voters but poorly with recruiters. Similarly, a photo might look Attractive but not Trustworthy: making it hard to close a date.
  5. This system also makes it possible to algorithmically characterize voting styles and account for them, and to detect and eliminate random voting, both of which are critically important to giving statistically accurate results.

All that said, this system is much more complex to build and run. (A "pick A or B" system can, for instance, collect 3 clicks and declare Photo B the winner, even in cases where those 3 clicks were from people who always click on Photo B.)

The Photofeeler team has always believed in doing photo testing the right way — not the easy way.

What if the same picture has come up again?

The short answer is: it's not possible to vote again on the same test.

One factor at play is that Photofeeler intentionally spaces out photos of the same person. So by the time a voter sees a picture of someone they've seen before, the details have become fuzzy.

In most reported cases of seeing the same picture, the photo in question is just slightly different from the original. Alternatively, it is possible — though less common — that a user started a brand new test using a photo they've tested previously.

In any case, please continue to vote respectfully.

How do I interpret my results in Photofeeler Ranks mode?

For Business and Social tests, Photofeeler Ranks are a comparison between your photo's score and all the rest that have been tested on the Photofeeler platform.

For Dating tests, Photofeeler Ranks compare your photo to other photos with subjects your same gender and age (e.g. Males, Age 32).

Photofeeler Ranks are given as a percentile. So, for instance, a Rank of 58% means your photo did better than 58% of photos.

Photofeeler's algorithms were written to take into account the number of votes received when calculating Ranks. For example, it is much rarer to get an average Competent score of +2.50 with 10 votes (97th percentile) than it is to get an average Competent score of +2.50 with only two votes (81st percentile). A small sample of votes is more likely to score unusually high or low due to sampling error.
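To see why vote count matters, here's a minimal Python simulation (using made-up vote numbers and a hypothetical opinion distribution, not Photofeeler's actual data or algorithm) showing that averages from 2-vote tests swing much more widely than averages from 10-vote tests:

```python
import random
import statistics

random.seed(42)

def average_score(n_votes, mean=1.0, spread=1.2):
    """Simulate one photo test: the average of n votes drawn from a noisy
    opinion distribution on the 0-3 vote scale (illustrative numbers only)."""
    votes = [max(0.0, min(3.0, random.gauss(mean, spread))) for _ in range(n_votes)]
    return sum(votes) / n_votes

# Spread of averages across many simulated tests, small vs. large vote counts
small = [average_score(2) for _ in range(2000)]
large = [average_score(10) for _ in range(2000)]

print(statistics.stdev(small) > statistics.stdev(large))  # True: 2-vote averages swing more
```

The same average score is simply rarer (and so ranks higher) when it persists across more votes.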

In Ranks mode, what are the faded sections on the bars?

If you hover or tap on the section, a tooltip will tell you your current Rank's confidence intervals.

A confidence interval is a statistical term: the range in which Photofeeler is pretty certain your "true" Rank lies. (That is, the Rank your photo would end up with if you had thousands of votes.)

The more votes you add to your test, the smaller these ranges get. Note that photo tests with very split opinions will have wider confidence intervals and require more votes to obtain the same certainty.
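As a rough sketch of the idea (Photofeeler's actual interval calculation isn't public, and the function and sample votes below are made up), a standard confidence interval around an average score narrows as votes accumulate:

```python
import math
import statistics

def score_confidence_interval(votes, z=1.96):
    """Rough 95% interval for a test's true average score — a textbook
    standard-error sketch, not Photofeeler's proprietary method."""
    n = len(votes)
    mean = statistics.fmean(votes)
    se = statistics.stdev(votes) / math.sqrt(n)  # standard error of the mean
    return (mean - z * se, mean + z * se)

few = [0, 3, 1, 2, 3]   # 5 votes, opinions split
many = few * 4          # same split of opinions, but 20 votes
lo1, hi1 = score_confidence_interval(few)
lo2, hi2 = score_confidence_interval(many)
print(hi1 - lo1 > hi2 - lo2)  # True: more votes -> narrower interval
```

Note how the interval width depends on both the vote count and how split the votes are, matching the behavior described above.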

In Scores mode, what are the dark-colored notches in the bars?

If you hover over any individual notch, a tooltip will tell you precisely what it means.

Score Notches

Generally, though, these notches represent the voting style of the person behind the vote and how that vote was adjusted in calculating your Photofeeler Rank.

For instance, if someone voted 3 ("very"), but they consistently rate photos very high for that trait, you'll see a notch on the far left of the bar. That means their "very" doesn't count as much as the "very" of a person who rarely leaves 3s for that trait.
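One simple way to model this kind of adjustment (a hypothetical z-score sketch; Photofeeler's real weighting method is proprietary) is to measure each vote against the voter's own habits:

```python
import statistics

def adjusted_vote(vote, voter_history):
    """Score a vote relative to the voter's own habits: how unusual is this
    vote *for this voter*? Illustrative only, not Photofeeler's algorithm."""
    mean = statistics.fmean(voter_history)
    spread = statistics.stdev(voter_history) or 1.0  # guard against zero spread
    return (vote - mean) / spread

generous = [3, 3, 2, 3, 3]   # almost always votes "very" (3)
tough    = [0, 1, 0, 2, 1]   # rarely votes high

# A 3 from the generous voter counts for less than a 3 from the tough voter
print(adjusted_vote(3, generous) < adjusted_vote(3, tough))  # True
```

Under this kind of scheme, a habitual "very" carries little information, while a rare one carries a lot.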

How do my photo results compare to everyone else's?

By default, your photos' results are shown by Photofeeler Rank (a percentile).

For Business and Social tests, if you score 70%, it means your photo scored higher than 70% of photos in the Photofeeler database.

For Dating photo tests, if you score 70% in Trustworthy, it means your photo scored higher in trustworthiness than 70% of photos with subjects your same gender and age.

What scores should I be shooting for exactly?

A Rank of 50% is average, so you'll want to be at or above that level.

The Photofeeler team adamantly believes that anyone can hit the 90th percentile (a 90% Photofeeler Rank or higher) for any trait with enough experimentation.

Does the Attractive Rank correlate with a rating of 1-10?

Dating users are often discouraged by a photo's low Attractive % due to common misconceptions specific to measures of attractiveness. Some truths to counteract the myths:

    1. Photofeeler Attractive %s have nothing to do with a 1-10 scale

    Many people assume a score of "20%" relates to a "2 out of 10" on a 1-10 scale, but these systems aren't actually related at all. A percentile of 20% means that your photo did better than 20% of photos with subjects your same gender and age. It isn't necessarily bad; it just hasn't beaten out 80% of the competition!

    Keep in mind that the competition on Photofeeler can be tough, especially when users test many photos with better and better results.

    2. Different photos of the same person get very different results

    It's foolish to test one photo and then assume that your result reflects how you're perceived in real life. Every photo tells a different story; no one photo will ever show you "as you are" enough to get a "true" rating. So remember: Photofeeler tests photos, not people!

I got the same exact Note several times. Are these repeats from the same person?

No. These are Quick Notes, and they're available to voters on the voting page, just above the big, orange "Submit Vote" button.

Before rolling out Quick Notes, a lot of the same comments were typed out over and over. Quick Notes help voters give specific feedback more quickly.

Sometimes several users will send the same Quick Note, resulting in what looks like repeats. These Notes were actually sent from different people.

Why have real people voting on photos? Why can't an algorithm just judge my photos instead?

Photofeeler's own co-founder/CTO has a PhD in Optimization Algorithms and experience writing artificial intelligence for Fortune 500 companies. The fact is, the way we interpret each other's faces is one of the most complex mental processes, and the field is a ways away from bottling all of the nuance involved.

What Photofeeler does with algorithms and machine learning, however, is monitor vote quality, detect all manner of voter fraud in real time, and use sophisticated score distribution analysis (accounting for factors like individual voter styles) to optimize the accuracy of test results. The consequence is statistical accuracy far beyond what a small number of votes could normally provide.

So get reliable results based on real people's feedback now, and who knows what AI the Photofeeler team may cook up later. ;)

Isn't the voting system easily gamed (making my results worthless)?

Nope. Thanks to sophisticated artificial intelligence, bad votes are detected and thrown out in real time.

As soon as some form of voter fraud occurs (for instance, a user exhibits careless voting behavior), Photofeeler starts throwing out those opinions so they never reach the photo owner.

If low-quality voting persists, the voter receives a warning and is told that their votes will earn less and less Karma as long as they remain unusable.

Voters who continue to give low-quality votes are banned from Photofeeler altogether.

This is bad news for someone who wants to game their way to feedback on their photos. The good news is, since activating these particular algorithms, low vote quality is basically nonexistent.