Yep. By giving your opinion (voting) on other people's photos, you earn feedback on your own.
As a timesaver and to get more feedback faster, there's also the option to buy Credits instead of voting.
Question answered? Go back to sign up now!
When you start a test on a photo, other logged-in Photofeeler users (within your selected voter demographic) can see that photo on the voting page in order to give their feedback.
When the test is ended, the photo becomes entirely private again.
No. Your photos can only be seen by other logged-in users while you're running a test.
No. Every photo Photofeeler publishes for demo or marketing purposes is with explicit permission.
Photofeeler does conduct research internally. This can mean a variety of things: having internal employees or contractors tag your photos with attributes like gender, publicly sharing aggregate numbers or statistics that your data may be part of, things of that nature. Never anything personal.
You earn Karma by giving your opinion (voting) on other users' photos.
Your Karma level can be low, medium or high. Every time you submit a vote, you'll notice a progress bar appears on the vote button. This bar shows you how close you are to reaching the next Karma level.
The more Karma you earn, the more votes you can expect to receive on your own Karma test.
As your test collects votes, your Karma level gets used up, but you can vote again to raise it. Alternatively, you also have the option to buy Credits instead of voting. Photo tests using Credits also receive votes faster.
1. Always be honest — it's the best favor you can do a fellow user.
2. Rate based on how you feel about the person, not the quality of the photo. (The latter is best addressed in the Notes box.)
3. Ignore logos and watermarks. Most commonly: LinkedIn logos from profile photo imports; photographers' watermarks present when users are choosing images to purchase.
4. Don't assume that the person in the photo and the user who uploaded it are always the same.
In choosing Photofeeler's default traits, the team asked loads of people what they think and feel about photos on LinkedIn, Facebook, OkCupid, Tinder, etc. The preferences were then narrowed down to foundational touchpoints.
A great deal of thought went into choosing these default traits, but that's not to say they won't ever change.
There are a lot of reasons why Photofeeler uses trait-based testing rather than asking voters to choose their favorite photo.
All that said, this system is much more complex to build and run. (A "pick A or B" system can, for instance, collect 3 clicks and declare Photo B the winner, even in cases where those 3 clicks were from people who always click on Photo B.)
The Photofeeler team has always believed in doing photo testing the right way — not the easy way.
The short answer is: it's not possible to vote again on the same test.
One factor at play is that Photofeeler intentionally spaces out photos of the same person. So by the time a voter sees a picture of someone they've seen before, the details have become fuzzy.
In most reported cases of seeing the same picture, the photo in question is just slightly different than the original. Alternatively, it is possible — though less common — that a user started a brand new test using a photo they've tested previously.
In any case, please continue to vote respectfully.
Photofeeler Ranks are a comparison between your photo's score and all the rest that have been tested on the Photofeeler platform.
Photofeeler Ranks are given as a percentile. So, for instance, a Rank of 58% means your photo did better than 58% of photos.
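That percentile comparison can be sketched in a few lines. The scores below are made up purely for illustration (Photofeeler's real calculation also weights and filters votes):

```python
def percentile_rank(score, all_scores):
    """Percentage of tested photos that this score beats."""
    beaten = sum(1 for s in all_scores if s < score)
    return round(100 * beaten / len(all_scores))

# Hypothetical average scores from a photo database
scores = [0.4, 1.1, 1.8, 2.0, 2.3, 2.6, 0.9, 1.5, 2.9, 1.2]
print(percentile_rank(2.1, scores))  # beats 7 of these 10 photos → 70
```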

Photofeeler's algorithms were written to take into account the number of votes received when calculating Ranks. For example, it is much rarer to get an average Competent score of +2.50 with 10 votes (97th percentile) than it is to get an average Competent score of +2.50 with only two votes (81st percentile). A small sample of votes is more likely to score unusually high or low due to sampling error.
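The sampling-error effect is easy to see in a quick simulation. Assuming, purely for illustration, that votes land uniformly on a 0-3 scale, averages of tiny tests swing much more wildly than averages of larger ones:

```python
import random

random.seed(42)

def spread_of_averages(n_votes, trials=10_000):
    """Standard deviation of average scores across many simulated tests."""
    averages = []
    for _ in range(trials):
        votes = [random.randint(0, 3) for _ in range(n_votes)]
        averages.append(sum(votes) / n_votes)
    mean = sum(averages) / trials
    var = sum((a - mean) ** 2 for a in averages) / trials
    return var ** 0.5

print(spread_of_averages(2))   # wide spread: extreme averages are common
print(spread_of_averages(10))  # narrower: extreme averages are much rarer
```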
If you hover or tap on the section, a tooltip will tell you your current Rank's confidence intervals.

A confidence interval is a mathematical term. It means the range in which Photofeeler is pretty certain your "true" Rank lies. (That is, the Rank your photo would end up with if you had thousands of votes.)
The more votes you add to your test, the smaller these ranges get. Note that photo tests with very split opinions will have wider confidence intervals and require more votes to obtain the same certainty.
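Photofeeler doesn't publish its interval math, but a standard normal-approximation confidence interval shows the shrinking behavior described above: same mean, more votes, tighter range.

```python
import math

def confidence_interval(votes, z=1.96):
    """Approximate 95% confidence interval for the mean vote score."""
    n = len(votes)
    mean = sum(votes) / n
    var = sum((v - mean) ** 2 for v in votes) / (n - 1)  # sample variance
    half_width = z * math.sqrt(var / n)
    return (mean - half_width, mean + half_width)

few = [3, 1, 2, 3, 0]
many = few * 4  # same mean and split of opinion, four times the votes
lo1, hi1 = confidence_interval(few)
lo2, hi2 = confidence_interval(many)
print(hi1 - lo1)  # wider interval with only 5 votes
print(hi2 - lo2)  # narrower interval with 20 votes
```

A more split set of opinions raises the variance term, which is why divided tests need more votes for the same certainty.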
If you hover over any individual notch, a tooltip will tell you precisely what it means.

Generally, though, these notches represent the voting style of the person behind the vote and how that vote was adjusted in calculating your Photofeeler Rank.
For instance, if someone voted 3 ("very"), but they consistently rate photos very high for that trait, you'll see a notch on the far left of the bar. That means their "very" doesn't count as much as the "very" of a person who rarely leaves 3s for that trait.
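One way to picture that adjustment is to re-center each vote against the voter's own history. This is a hypothetical sketch, not Photofeeler's actual algorithm; the function and the 0-3 scale midpoint are assumptions for illustration:

```python
def adjusted_vote(vote, voter_history, scale_mid=1.5):
    """Re-center a vote against this voter's usual rating (hypothetical sketch).

    A "3" from someone who hands out 3s constantly carries less signal than
    a "3" from a voter who rarely gives them.
    """
    voter_mean = sum(voter_history) / len(voter_history)
    deviation = vote - voter_mean                      # how unusual this vote is for them
    return max(0.0, min(3.0, scale_mid + deviation))  # keep it on the 0-3 scale

generous = [3, 3, 3, 2, 3]    # almost always votes "very"
selective = [1, 0, 2, 1, 1]   # rarely votes high

print(adjusted_vote(3, generous))   # their "very" counts for less
print(adjusted_vote(3, selective))  # their "very" counts for much more
```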
By default, your photos' results are shown by Photofeeler Rank (a percentile).
If you score 70% in Likable, for instance, it means your photo scored higher in likability than 70% of photos in the Photofeeler database.
A Rank of 50% is average, so you'll want to be at or above that level.
The Photofeeler team adamantly believes that anyone can hit the 90th percentile (a 90% Photofeeler Rank or higher) for any trait with enough experimentation.
Dating users are often discouraged by a photo's low Attractive % due to common misconceptions specific to measures of attractiveness. Some truths to counteract the myths:
1. Photofeeler Attractive %s have nothing to do with a 1-10 scale
Many people assume a score of "20%" relates to a "2 out of 10" on a 1-10 scale, but these systems aren't actually related at all. Like all other traits on Photofeeler, Attractive Ranks are given as a percentile. A percentile of 20% means that your photo did better than 20% of tested photos in the database. It isn't necessarily bad — it just hasn't beaten out 80% of the competition!
Keep in mind that the competition on Photofeeler can be tough — especially when users test many photos with better and better results.
2. Different photos of the same person get very different results
It's foolish to test one photo and then assume that your result is representative of how you're perceived in real life. Every photo tells a different story; no one photo will ever show you "as you are" enough to get a "true" rating. So remember: Photofeeler tests photos, not people!
No. These are Quick Notes, and they're available to voters on the voting page, just above the big, orange "Submit Vote" button.
Before Quick Notes were rolled out, a lot of the same comments were typed out over and over. Quick Notes help voters give specific feedback more quickly.
Sometimes several users will send the same Quick Note, resulting in what looks like repeats. These Notes were actually sent from different people.
Photofeeler's own co-founder/CTO has a PhD in Optimization Algorithms and experience writing artificial intelligence for Fortune 500 companies. The fact is, the way we interpret each other's faces is one of the most complex mental processes, and the field is a ways away from bottling all of the nuance involved.
What Photofeeler does with algorithms and machine learning, however, is monitor vote quality, detect all manner of voter fraud in real time, and use sophisticated score distribution analysis — accounting for factors like individual voter styles — to optimize the accuracy of test results. The consequence is statistical accuracy far beyond what a small number of votes could normally provide.
So get reliable results based on real people's feedback now, and who knows what AI the Photofeeler team may cook up later. ;)
Nope. Thanks to sophisticated artificial intelligence, bad votes are detected and thrown out in real time.
As soon as some variation of voter fraud is committed (for instance, a user exhibits careless voting behavior), Photofeeler starts throwing out these opinions so they never reach the photo owner.
If low-quality voting persists, the voter receives a warning and is told that their votes will earn a decreasing amount of Karma for as long as they remain unusable.
Voters who continue to give low-quality votes are banned from Photofeeler altogether.
This is bad news for someone who wants to game their way to feedback on their photos. The good news is, since activating these particular algorithms, low vote quality is basically nonexistent.