Yep. By giving your opinion (voting) on other people's photos, you earn feedback on your own.
As a timesaver and to get more feedback faster, there's also the option to buy credits instead of voting.
When you start a test on a photo, other logged-in Photofeeler users (within your selected voter demographic) can see that photo on the Voting page in order to give their feedback. When the test is ended, the photo becomes entirely private again.
No. Your photos can only be seen by other logged-in users while you're running a test.
No, not in the way you are afraid of. Every photo we use for marketing or demo purposes is with explicit permission. We never want to overstep boundaries or embarrass anyone.
Photofeeler is, however, a research project in addition to a photo testing tool, so we will be using your photo behind the scenes in our research efforts. This can mean a variety of things: having us (or people who work with us) tag your photos by gender, publicly sharing aggregate numbers or statistics of which your data may be a part, and things of that nature. Never anything personal.
You earn Karma by giving your opinion (voting) on other users' photos.
Your Karma level can be Low, Medium or High. Every time you submit a vote, you'll notice a progress bar appears on the vote button. This bar shows you how close you are to reaching the next Karma level.
The more Karma you earn, the more votes you can expect to receive on your own Karma test.
As your test collects votes, your Karma level gets used up, but you can vote again to raise it. Alternatively, you also have the option to buy credits instead of voting. Photo tests using credits also receive votes faster.
1. Always be honest— it's the best favor you can do a fellow user.
2. Rate based on how you feel about the person, not the quality of the photo. (The latter is best addressed in the notes box.)
3. Ignore logos and watermarks. Most commonly: LinkedIn logos from profile photo imports; photographers' watermarks present when users are choosing images to purchase.
4. Don't assume that the person in the photo and the user who uploaded it are always the same.
In choosing our default traits, we asked loads of people what they think and feel about photos on LinkedIn, Facebook, OKCupid, etc. The preferences we collected were then narrowed down to foundational touchpoints.
We put a great deal of thought into our default traits, but that's not to say they won't ever change.
There are many reasons why we use trait-based testing rather than asking voters to choose their favorite photo.
All that said, a system like ours is much more complex to build and run. (A "pick A or B" system can, for instance, collect 3 clicks and declare Photo B the winner, even in cases where those 3 clicks were from people who always click on Photo B.) But we at Photofeeler have always believed that it was worth it to do photo testing right.
The short answer is: it's not possible to vote again on the same test.
One factor at play here is that our system intentionally spaces out photos of the same person so that, by the time a voter sees a picture of someone they've seen before, the details have become fuzzy.
In most reported cases of seeing the same picture, we've found that the photo in question was just a little different from the original. Maybe it's cropped differently, features a slightly different expression, or shows a change of clothes. Alternatively, it is possible (though less common) that a user started a brand-new test using a photo they've tested previously.
In any case, please continue to vote respectfully.
Photofeeler Ranks are a comparison between your photo's score and all the rest that have been tested on our site.
Photofeeler Ranks are given as a percentile. So, for instance, a rank of 58% means your photo did better than 58% of photos.
We take special care to account for the number of votes received when calculating Ranks. For example, it is much rarer to get an average Competent score of +2.50 with 10 votes (97th percentile) than with only two votes (81st percentile). A small sample of votes is more likely to score unusually high or low due to sampling error.
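The sampling-error effect can be illustrated with a quick simulation. Note that the vote distribution and 0-3 scale below are made up for illustration; Photofeeler's actual data and weighting are not public:

```python
import random
import statistics

random.seed(0)

def simulated_test_average(n_votes):
    """Average score of one test, drawing votes from a made-up
    distribution on a 0-3 voting scale (long-run mean of 2.4)."""
    votes = [random.choice([1, 2, 3, 3, 3]) for _ in range(n_votes)]
    return sum(votes) / n_votes

# Spread of test averages across 10,000 simulated tests each:
spread_2_votes = statistics.pstdev(simulated_test_average(2) for _ in range(10_000))
spread_10_votes = statistics.pstdev(simulated_test_average(10) for _ in range(10_000))

# With only 2 votes, averages swing much more widely, so an extreme
# average like +2.50 is less surprising than with 10 votes.
```

The 2-vote spread comes out roughly √5 times the 10-vote spread, matching the familiar 1/√n shrinkage of sampling error.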
If you hover or tap on the section, a tooltip will tell you your current Rank's confidence intervals.
A confidence interval is a mathematical term. It means the range in which we're pretty certain your "true" Rank lies. (That is, the Rank your photo would end up with if you had thousands of votes.)
The more votes you add to your test, the smaller these ranges get. Note that photo tests with very split opinions will have wider confidence intervals and require more votes to obtain the same certainty.
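As a back-of-the-envelope sketch of why intervals narrow with more votes (using the textbook standard error of the mean; the vote standard deviation is an assumed value, and Photofeeler's actual interval calculation is unpublished):

```python
import math

def ci_half_width(n_votes, vote_sd=0.8):
    """Approximate 95% confidence half-width for a test's average
    score. The vote standard deviation of 0.8 is an assumption;
    a photo with more split opinions would have a larger one."""
    return 1.96 * vote_sd / math.sqrt(n_votes)

narrow = ci_half_width(40)              # ~0.25: many votes, tight interval
wide = ci_half_width(10)                # ~0.50: few votes, loose interval
split = ci_half_width(40, vote_sd=1.2)  # split opinions widen it again
```

Quadrupling the vote count halves the interval, while more polarized voting (a larger spread per vote) widens it at any vote count.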
If you hover over any individual notch, a tooltip will tell you precisely what it means.
Generally, though, these notches represent the voting style of the person behind the vote and how that vote was adjusted in calculating your Photofeeler Rank.
For instance, if someone voted 3 ("Very"), but they consistently rate photos very high for that trait, you'll see a notch on the far left of the bar. That means their "Very" doesn't count as much as the "Very" of a person who rarely leaves 3s for that trait.
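Photofeeler hasn't published its adjustment formula, but the idea behind it can be sketched with a standard normalization: express each vote relative to that voter's own habits for the trait. Everything below (the function, the example means and standard deviations) is hypothetical:

```python
def adjusted_vote(raw_vote, voter_mean, voter_sd):
    """Express a vote relative to the voter's own habits (a z-score).

    This is NOT Photofeeler's actual formula, which is unpublished;
    it is a common normalization that captures the same idea: a "3"
    from someone who rarely gives 3s counts for more than a "3" from
    someone who gives them constantly.
    """
    return (raw_vote - voter_mean) / voter_sd

# A generous voter's "Very" (3) barely registers...
generous = adjusted_vote(3, voter_mean=2.8, voter_sd=0.4)  # ~0.5
# ...while a strict voter's "Very" counts for much more.
strict = adjusted_vote(3, voter_mean=1.2, voter_sd=0.9)    # ~2.0
```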
By default, your photos' results will be shown by Photofeeler Rank (a percentile).
If you score 70% in Likability, for instance, it means your photo scored higher in Likability than 70% of photos in our database.
A rank of 50% is average, so you'll want to score at or above that level.
Knowing what we do about this, though, we believe that anyone can hit the 90th percentile (a 90% Photofeeler Rank or higher) in all categories with enough experimentation.
We've noticed that Dating users can get very discouraged by a photo's low Attractive % due to common misconceptions specific to measures of Attractiveness. Allow us to clear the air with some truths:
1. Our Attractive %s have nothing to do with a 1-10 scale
Many people assume a score of "20%" relates to a "2 out of 10" on a 1-10 scale, but these systems aren't actually related at all. Like all other traits on our site, Attractive Ranks are given as a percentile. A percentile of 20% means that your photo did better than 20% of tested photos in our database. That isn't necessarily bad— it just hasn't beaten out 80% of the competition!
Keep in mind that it can be tough competition around here— especially when users test many photos with better and better results.
2. Different photos of the same person get very different results
It's a mistake to test one photo and then assume that the result reflects how you're perceived in real life. Every photo tells a different story; no one photo will ever show you "as you are" enough to get a "true" rating. So remember: Photofeeler tests photos, not people!
No. Sometimes several users will send the same Quick Note, resulting in what looks like repeats. These notes were actually sent from different people.
Photofeeler's own co-founder/CTO has a PhD in Optimization Algorithms and experience writing artificial intelligence for Fortune 500 companies. The fact is, the way we interpret each other's faces is one of the most complex mental processes, and the field is still far from capturing all of the nuance involved.
What Photofeeler does with algorithms and machine learning, however, is monitor vote quality, detect all manner of voter fraud in real time, and use sophisticated score-distribution analysis, accounting for factors like individual voter styles, to optimize the accuracy of test results. The result is statistical accuracy far beyond what a small number of votes could normally provide.
So get reliable results based on real people's feedback now, and who knows what AI our own team may cook up later. ;)
Nope. Voting on Photofeeler is virtually impossible to game thanks to sophisticated artificial intelligence that detects all manner of voter fraud in real time.
While voters who have received warnings or been banned from our system have many theories about why or how it happened, the truth is that our system is more complex than any of those theories account for.
The good news is, since activating these particular algorithms, low vote quality is basically nonexistent.