The Evolution of Trust is a cute explain-o-game about cooperation


Here’s something nice, but also depressing. I started playing The Evolution of Trust [official site], a short browser-based game, expecting it to show me why trusting other people is a good thing. Ten minutes in, it’s taught me that I need to cheat more.

It has you playing a quick ‘Game of Trust’. If you stick a coin into a machine, the person at the other side gets three coins, and vice versa. So, should you co-operate and play the slot, or cheat, withhold your money and hope the sucker on the other side is feeling generous?

It’s really a teaching tool for learning the (very) basics of game theory. It talks you through your options, takes you through different scenarios and discusses how trust can evolve over time, hence the name. In the coin game, it turns out that always cheating is the best option if there are fewer than five rounds, but if there are more, the best way forward is to copy your opponent’s last action. I won’t explain all the reasoning and mechanics behind it – if you’ve got a spare half an hour then give it a go and find out for yourself.
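
If you fancy poking at the numbers yourself, here’s a rough sketch of the coin game in Python – the payoffs are the ones described above (cooperating costs you one coin and hands the other player three), and the strategy names are my own labels rather than the game’s:

```python
# Coin-machine payoffs: putting a coin in costs you 1 and gives the other
# player 3; cheating costs nothing and gives nothing.
def payoff(me, them):
    # me/them: True = cooperate (put a coin in), False = cheat
    return (-1 if me else 0) + (3 if them else 0)

def always_cheat(my_history, their_history):
    return False

def copycat(my_history, their_history):
    # Cooperate first, then repeat the opponent's last move.
    return True if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += payoff(move_a, move_b)
        score_b += payoff(move_b, move_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

for rounds in (1, 3, 5, 10):
    print(rounds, play(always_cheat, copycat, rounds), play(copycat, copycat, rounds))
```

Against a copycat, the permanent cheater grabs three coins in round one and nothing ever after, while two copycats earn two coins each every round – which is the gist of why longer games reward cooperation.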

It’s well-presented with scribble-y hand drawn characters and feeds my love of beautiful infographics. The music is a little annoying if you’ve got it running in the background, so maybe mute the tab if you get pulled away.

By the way, if you want some competition, I managed to get 27 out of 49 in the test, which is about five minutes in. Let me know how you fared!

50 Comments

  1. Paladin says:

The only thing I’d like to have the presentation unambiguously write block on what is that today’s carebears are the main ones directly responsible for the proliferation of tomorrow’s assholes. In other terms, unconditional cooperation is *not* the morally good thing some people pretend it is.

    • Paladin says:

      Er, “black on white”. No idea what happened there.

    • Aetylus says:

Actually that’s not what it says. It doesn’t discuss morals at all, it only discusses outcomes… and in gaming speak it would be something along the lines of: if you want the optimal experience for yourself, you should be caring to carebears, troll the trolls, and be just marginally nicer to other people than they are to you.

      • Paladin says:

Yes, but unless you disagree that the best outcome on a global level is the one where the average outcome is the highest possible and nearly everyone gets it (which is where I slip into morals, because I think the part of it that is universal is pretty decent), you basically want to train a population that’s not comprised of unconditional cooperators. Unconditional cheating has a hard time surviving any strategy designed to account for its existence.

    • Artist says:

      You misunderstand and mix up “math” with “morals”. Apples and cars….

      • TechnicalBen says:

It seems that in trying to judge others’ morals, we can fail to realise that the foundation of ours can pivot and be different.

Far too many people put *personal* “money/wealth/food/safety” over others as a moral or personal goal.

What if other people want to give to others, regardless? Yes, some may be greedy, but we can then stop giving to those, and give to others.

        Yet personal gain seems to be the only “winning” end game for many.

        • Paladin says:

Nah, if we have to assume anything to start thinking with models like game theory, it’s that human beings are way more rational than we think. In other terms, people don’t act morally for its own sake, but because acting morally has a personal positive outcome for them. They do *gain* something, even if it’s on a self-actualisation or emotional level. It’s just hard to quantify.

They can try to do that and defend it as morally good. I just happen to disagree, but that’s why morals are partly relative, I suppose. I agree that the presentation maker maybe should stick purely to the factual; but here’s the fact: unconditional cooperation favours the evolution of further and further uncooperative behaviours.

    • corinoco says:

      Your first sentence makes little to no sense. Ah, good; you noticed that.

However, the article made no mention of morals (as others have pointed out) and also misses the important factor of context.

      Replace ‘money tokens’ with say, food; and have both players starving; all of a sudden it’s a very different game.

You don’t ‘win’ games like this; the ‘win’ condition is a functioning and long-term viable society (and thus species). The ‘lose’ condition is extinction. Homo sapiens currently appears to be aiming for the latter.

      • Paladin says:

        Changing coins for food in a starving situation adds some pathos to the story, but that’s just the same game with different payoffs that punishes uncooperation more harshly.

    • Kitsunin says:

Even this is much less black and white than you are implying. If your goal is to maximize happiness among everyone, then being a “carebear” is in fact the best move in the short term. You get screwed by the assholes, but you still increase overall “points” by letting them screw you… on the other hand, unrealistic as it is, if nobody has a strategy which involves screwing others, the most effective strategy, period, is actually to be entirely altruistic: even tit-for-two-tats suffers slightly on the rare occasion when two mistakes are made in a row.
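
      If anyone wants to sanity-check that last claim, here’s a rough simulation sketch (Python, with an invented 5% slip rate – none of these numbers come from the game) pitting a pair of unconditional cooperators against a pair of tit-for-two-tats players:

      ```python
      import random

      def pair_total(forgiving, rounds=1000, mistake=0.05, trials=100):
          # Two players; each intends to cooperate, unless 'forgiving' is on
          # (tit-for-two-tats) and the other player cheated twice in a row.
          total = 0.0
          for _ in range(trials):
              hist = [[], []]
              score = [0, 0]
              for _ in range(rounds):
                  moves = []
                  for i in (0, 1):
                      other = hist[1 - i]
                      intend = not (forgiving and len(other) >= 2
                                    and not other[-1] and not other[-2])
                      if random.random() < mistake:  # slip: do the opposite
                          intend = not intend
                      moves.append(intend)
                  for i in (0, 1):
                      # Coin-machine payoffs: cooperating costs 1, gives the other 3.
                      score[i] += (-1 if moves[i] else 0) + (3 if moves[1 - i] else 0)
                      hist[i].append(moves[i])
              total += sum(score)
          return total / trials

      print("always cooperate:", pair_total(forgiving=False))
      print("tit-for-two-tats:", pair_total(forgiving=True))
      ```

      Every deliberate retaliation shaves a little off the pair’s combined total, so the pure cooperators come out marginally ahead – but only as long as nobody in the pool is cheating on purpose.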

      • Paladin says:

        “If nobody has a strategy which involves screwing others, the most effective strategy period is actually to be entirely altruistic.”

If by that you’re saying “if nobody has a strategy that involves cheating, even by retaliation”, I’m pretty sure I saw entire populations of cooperative people slaughtered in the presentation.

If by that you’re saying “if nobody has a strategy that involves cheating by default, but that could happen under some trigger”, well sure, unconditionally cooperative and conditionally cooperative people would be indistinguishable from each other. But your assumption basically starts with “if mountains are made of ice cream”.

        • Paladin says:

          I have no idea why I missed the “short term” part of your argument. My apologies.

That’s actually a pretty decent point. The problem I have with it is that we’re all more or less subject to some form of temporal discounting that makes us undervalue long-term outcomes. Which is why climate change is so hard to get people to take seriously, while “tomorrow: moderate heat wave that about all of us will survive” will make everybody prepare for it.

So all in all, I understand short-term benefits. I mean, if only long-term ones had any value, one could take very cruel short-term solutions to deal with human overpopulation, for instance. And of course people being unconditionally cooperative aren’t being evil. Just maybe as irresponsible (or, if we want a less harsh term, short-sighted) as parents who endlessly spoil their children while setting no limits, while the rest of the world will have to deal with the grown-ups they’ll become.

    • poliovaccine says:

See, the thing is, morals are not descriptive, they are prescriptive. They describe how we *want* things to be, or how we believe they *should* ideally be. Obviously they don’t bear out ideally in every scenario, but far too many people use the potential for others to screw them as “moral” license to screw others first. But “an eye for an eye leaves every man blind.”

      It’s like how everyone in the world thinks Darwin coined the term “survival of the fittest,” and that in turn “evolution” is synonymous with having the biggest baddest claws and using em first. But he never created that term, and in fact fought the misconception during his lifetime, because his theory of evolution showed precisely the opposite – that animal societies which prioritize caring for their weak and infirm and elderly and sick benefit overall in the long run from their presence, because believe it or not life consists of more than just the pursuit of shelter, sex and food. There’s this whole social aspect we actually need fulfilled just as badly as more physiological needs. Case in point: solitary confinement.

Cooperation is what allows a species to evolve, to reduce natural predation to the point where evolutionary divergence within that species can safely occur. Evolution is ultimately adaptation, and prey must adapt against predators, but where predation can be reduced, everything thrives – and in isolated areas where there is virtually *no* predation, like Madagascar or the Galapagos or Socotra, evolution is able to unfold in all kinds of unique and otherwise unseen ways. And that’s where forward evolution really occurs – not in the prime example of the species as it stands today, but in the mutants with a compelling new ability they’ve adapted out of their mutation.

      Back in school you get taught that “the zebra evolved his stripes to help confuse lions when they’re all together in a pack,” but in reality the zebra didn’t “do” that “for” that or any reason – rather, horselike animals existed, they traveled in herds, at some point one was born with stripes, it reproduced, the stripe trait was dominant, and the fact that it combines with their herd behavior to make individuals difficult to pick out of the group to a lion’s eye is the convenient factor that leads to there still being zebras today – whereas the variety of horselike animals who mutated to be neon pink with a big red target painted on their side just didn’t have the same luck.

      The point is, mutation – essentially the odd ones, the freaks – is how a species actually moves forward in evolutionary leaps and bounds, and mutants are only possible in a safe and peaceable enough environment – i.e. one in which “carebears” facilitate their existence.

Incidentally, I would contend that maintaining this conception of people who are willing to trust and refrain from cheating others for personal gain as “carebears” is actually what makes you an asshole, not so much “being facilitated” by them. I think if you believe in a selfish ideology you’re more likely to be a selfish individual – all the more so if you think that immediate practicality and material benefit identify moral correctness. Likewise, if you believe in aspiring to the moral high ground in spite of the obvious limitations and complications of living in the third dimension as material beings, you will actually see that happen sometimes, certainly more often than the alternative.

There’s a quotation, can’t remember who said it, but it’s that “Compassion, though noble, is often misplaced.” I would have to agree with that from any number of angles, but that hardly means compassion has no place in the world whatsoever. It remains noble, after all. It just means you should try not to *waste* it on someone who will selfishly take it as a sign of your own weakness, or feel no moral need to reciprocate. Someone who’d just call you a carebear about it. Those folks can solve their shit on their own if that’s what they want – meanwhile, communities form and outnumber them. If you want to put it into the mistaken-ass terms of “survival of the fittest,” the fittest in this case isn’t the one badass tiger with the big teeth and claws, no, the fittest in this scenario is the group of soft, pink, fleshy humans who have no claws of their own but who have instead formed a society, pooled their resources and abilities, collaborated towards things they could never achieve on their own, such as inventing guns, and who also outnumber the tiger, and hunt him in whatever-sized group is sufficient to protect themselves and take home the tiger pelt at the end of the day. I mean to me this much is obvious, but of course different people take different things for granted. Still, the whole attitude of “altruism is irrational” just reminds me of that line from Indiana Jones… “I’ve got you two surrounded!” Haha, I mean of course Indy makes it out alive, but that’s cus he’s the hero of the movie.

Incidentally, this game would be cool to set up in real life, stick it in a city or something, like put a two-sided coin machine at Grand Central, let strangers hop on and “play” with quarters or dollars, record the results. You’d have to prevent people from playing who knew each other, so they couldn’t just collaborate towards producing the max amount of coins and then split the take later, but I almost wouldn’t want to, and instead let it happen and watch, cus that’s just one of those wrinkles that would make it interesting to do in real life. It’d be cool to see how it pans out in the context of actual humanity, with all its unique and unpredictable little complications, since that’s sort of where this game’s model meets its limit. Be cool to see if results differ between putting it in Grand Central Station vs. a small town where everyone knows or at least recognizes each other. Ultimately, the game is a little too abstracted to be completely meaningful, since this scenario in real life could/would have all kinds of variables playing into it that just can’t be represented here, but it’s definitely a useful little demonstration of the concept in itself.

Also, it does a good job of demonstrating how “one bad apple ruins the bunch,” in the sense that if one person is trying to be charitable, and the other person is trying to screw ’em, it’s like multiplying by zero: screwover guy always gets more coins, so as soon as you realize you’re dealing with screwover guy you need to fight him with his own tactics. You may think that proves that screwover guy has the better tactic… eeeexcept that if *neither* of ’em tried to screw each other, they’d *both* make out with more.

      • shde2e says:

        I would just like to say that I agree with basically everything you said here, and that you did a far better job of it than I could ever hope to do :)

Also, your last paragraph made me realize that this type of screwover behaviour looks a lot like slash-and-burn farming. Sure, at first the harvest is bountiful, and better than if you tried a more moderate farming technique. But as soon as the initial fertility runs dry, the whole thing becomes a barren wasteland and now nobody can grow their food there anymore. While the people using more cooperative and sustainable techniques can enjoy the land for centuries to come.

        Also, vaccines are great.

      • Paladin says:

OK for the prescriptive-over-descriptive part.

Then you make a very long post to basically explain that cooperation is desirable, which at no point have I disagreed with. (The key word here is: “unconditional”.) Also, of course, the example used people in symmetric positions, but not everybody is in a social, psychological or physical condition to cooperate as much as others. Thus welfare is a great thing. Taking care of homeless people is a great thing. Supporting people facing discriminatory prejudice is a great thing.

You hint at aspiring to the moral high ground as something different. But actually the moral high ground rewards us with self-satisfaction, which turns out to be a currency like any other in game theory. So basically you can trick yourself into changing your perception of the outcomes of the game so that by being unconditionally cooperative you still feel like everybody’s winning, when some external observers may state that they see you getting screwed. And the person screwing you is reinforced to do so. Is that moral? Is the fix to undesirable behaviours not to slap the hand of wrongdoers (while still giving them the chance to regain our trust) but to shift our perspective so that we find satisfaction in what we formerly considered as being exploited? That’s quite terrifying to me and hard to justify as good.

    • datreus says:

      Without presenting too much conjecture as to how you could come up with such a fallacious statement, I would suggest you consider the fact that your value system here is one that you were given (probably as a child) precisely to ensure that ‘assholes’ get to flourish in the world.

      To put it as simply as possible, people are ‘assholes’ because of fear (or one of its sub-emotions). That is the motivational force (outside of severe mental illness) for negative actions.

Being a ‘carebear’ doesn’t create this in people. There are a massive range of other things that will (almost all things from the socio-political side of the spectrum your point of view originates from), but trying to engender unconditional co-operation is not one of them. There is a very strong pushback from the social structures on the conservative side of the fence against co-operative modes of thought, as unselfishness is anathema to their operation. The reality of course is that if people overcome fear and use reason, co-operation is the most effective means of resource maximisation, which allows the best sustainable course to be charted. The only circumstance where it is not is the extremely rare situation where two human beings have quite literally only just enough resources to barely sustain one of their lives.

      So unless you’re talking about two people in a desert with a water bottle that has just enough water for one life, as opposed to the near infinite situations where this is not the case, then co-operation is indeed the best course of action.

      tl;dr Nice people don’t create assholes. Other assholes do.

      • Paladin says:

        “To put it as simply as possible, people are ‘assholes’ because of fear”

Citation needed. Fear is probably a strong motivator for some behaviours we judge as negative, but that doesn’t mean every behaviour we judge as negative is due to fear; unless you’ve mistaken Star Wars for psychology classes. I can see the headline from here: “Misunderstood rich people are being greedy at our expense out of pure terror.”

    • ColonelFlanders says:

I couldn’t disagree with your statement any more if you’d suggested murdering all your family at the end. Assholes will be assholes whether you trust them or not; the only thing you can do to make sure you don’t become one yourself is to make sure you treat people the way you wish to be treated. Sure you might get screwed by a few people over the course of your life, but honestly what moron would keep trusting the same guy who fucked him? The goal of this game is not to get everyone to trust every single Nigerian Prince that sends you an email, it’s trying to demonstrate that trust and cooperation are, by the numbers, a safe way to live your life, while hopefully maybe perhaps making the world a less shite place to live. You’ve gotta be the change you want to see, otherwise you just exacerbate the shittiness.

  2. Aetylus says:

That is certainly the clearest explanation of the prisoner’s dilemma I’ve seen… I’m now convinced the default method for teaching systems behaviour should be nice scribbly online games.

    • Ghostwise says:

      Serious games have been used to teach system dynamics since Forrester’s days. :)

  3. Landiss says:

Nice little game thing. Also, you can turn off the music in the bottom left corner.

  4. AndreasBM says:

    Got 40/49, looks like I have a talent for exploiting people’s trust!

  5. Chaomancer says:

    I also got 40/49, and I didn’t exploit anyone’s trust :)

  6. gulag says:

Great explainer of a sometimes difficult-to-illustrate set of issues. If I have one observation to make regarding the game theory approach to solving this problem, it is that I have never seen the issue of inter-player communication addressed. In the course of several rounds I may discover you are a particular type of player and adjust accordingly, but there is never any discussion of what happens if I then tell other players what I have discovered. A reputation is a powerful tool for enforcing trust, and is probably hard to simulate in the framework of these models, rendering them useful, but to my mind, incomplete.

  7. Babymech says:

    If you are too busy to actually figure it out for yourself, and just want the see the Good ending, the correct sequence is Trust – Cheat – Trust – Cheat – Down – Up – Left – Right – A – B.

  8. Vacuity729 says:

I discovered this a few days ago (and scored 38, and, it turns out, was pretty much following the “copykitten” approach), then used it in a class with a student, who cheated way more and only scored 30.
    As the commenter above mentions (and in fact the presentation does too), though, without a system to describe reputation it’s sadly incomplete.

    • sticklander says:

      There are other ideas they could have included as well, easier to build than reputation imo:

      * Sometimes it’s easier to cheat than to cooperate. What if there were different probabilities for mistakenly cheating, versus mistakenly cooperating?
* Besides cooperating and cheating, what about a third option: abstaining? This is known as the optional prisoner’s dilemma. How would it affect people’s decisions if they saw someone else abstain rather than cooperate? If they knew they could abstain if the other cheats? (There’s a rough sketch of this one after the list.)
* What about dividing these groups into communities? People within each community would communicate with each other much more than outside; add in some probabilities for how often they talk to someone outside, and how likely strategies are to spread to other communities (a community adopting a top strategy from a different one).
* What about spontaneous new strategies? How likely is it that a cheater arises out of nowhere — or a detective, or a copycat?
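
      The abstain idea is the quickest of these to prototype. Here’s a rough sketch of the optional prisoner’s dilemma reusing the coin-machine payoffs; the +1 “loner” payoff is invented and just needs to sit between mutual cooperation (+2) and mutual cheating (0):

      ```python
      # Payoffs as (my score, their score), using the coin-machine numbers.
      PAYOFFS = {
          ("cooperate", "cooperate"): (2, 2),
          ("cooperate", "cheat"):     (-1, 3),
          ("cheat", "cooperate"):     (3, -1),
          ("cheat", "cheat"):         (0, 0),
      }

      def score(move_a, move_b):
          # If either player opts out, both collect the small loner payoff.
          if "abstain" in (move_a, move_b):
              return (1, 1)
          return PAYOFFS[(move_a, move_b)]

      # A wary strategy: cooperate, but abstain (rather than retaliate)
      # after being cheated.
      def wary(my_history, their_history):
          if their_history and their_history[-1] == "cheat":
              return "abstain"
          return "cooperate"

      mine, theirs, total = [], [], 0
      for _ in range(5):
          my_move = wary(mine, theirs)
          my_score, _ = score(my_move, "cheat")
          total += my_score
          mine.append(my_move)
          theirs.append("cheat")
      print(total)  # -1 + 1 + 1 + 1 + 1 = 3: better than a sucker's -5
      ```

      Against an always-cheater the wary player settles at +1 a round instead of the sucker’s -1, without ever cheating itself – which is roughly why adding the option changes the dynamics so much.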

  9. Kollega says:

A friend linked this to me a few days ago. However, what I carried away from the game is the direct opposite of “depressing” and “I need to cheat more”. Oh no, not that at all. What I carried away was “the propaganda of life as a zero-sum game breeds mistrust like nothing else” and “don’t be a selfish asshole if you want to make lifelong friends”. I got something like 38 points in the first minigame just by cooperating first, and cheating only if the other player does. And while I don’t have that many friends in my life, most of them are very close to me, and I can trust them. Because I’ve actually taken some time to get to know them as people.

    To sum it up: it’s not “depressing”, it’s “we can only fix the lack of trust in our society by working together” – which is, of course, difficult by definition, but if we don’t, it’s just going to get worse and worse. A fitting description for a lot of the current issues, really.

    • RanDomino says:

This is exactly right. The solution to the iterated Prisoner’s Dilemma is to trust until the other side betrays, then betray every time (with a few trusts thrown in here and there if they trust again, just to give them a second chance). This results in both the best individual scores and the best collective scores.

      IRL, if you only have one chance to trust or betray, you’re going to do so based on your history with the other person.

      The social lesson is that capitalist alienation, in which every interaction is a one-off transaction between strangers, destroys trust necessary to build society.

    • datreus says:

      Spot on. It’s almost as if certain segments of our society want that negative take on things to proliferate for their own ends.

  10. malkav11 says:

Fundamentally I have never seen any reason to “cheat” or “betray” in most configurations of this game. The idea seems to be that successfully betraying someone is the optimal outcome for you, but it isn’t really, because you’re far better off with the mutual cooperation outcome than you are with mutual betrayal, and in any number of iterations beyond one, betraying once is going to pretty much guarantee you’ll achieve the latter forever afterwards. Furthermore, the extra benefit to you is usually negligibly more than the cooperation outcome and the penalty to your partner is generally negligibly worse than the mutual betrayal outcome. The numbers need to be way different before it makes any sense (or you need an outcome like the betrayed player dying to incentivize not being that person).

    • Someoldguy says:

      We had that version presented on a corporate team building day. You were asked to “do everything you can to maximise the score” and they waited to see what people would do. The only way in your control to do that was to keep cooperating. Despite some individual members being keen to betray on the last move, both teams went for cooperation.

    • poliovaccine says:

See, I think that’s correct except for the idea that this is “supposed” to indicate that selfishness is the most beneficial strategy. I’m sure some people believe that’s what this is “supposed” to say, and will assert as much, but like you say yourself, the best outcome is one in which both parties cooperate. Whatever the results are, that’s all this little model is “supposed” to show, and that’s it. It shows not just the objective end result, but also how different people interpret the problem. Some people will cheat, see the immediate gain, and pursue the notion no further. Some will continue to give even when they know the person on the other side is screwing them. Can’t do anything for those people, they have their reasons. But as you observed, the way to wind up with the most coins is to cooperate. Also, I just don’t see anything about this which implies any particular outcome is its intended outcome, or is somehow correct. Basically, I just disagree with the notion that this is meant to illustrate the efficacy of cheating… not the least of which reasons being it *doesn’t* actually show that haha, or rather, it shows the efficacy of cheating is low compared with the efficacy of collaboration.

From seeing the prisoner’s dilemma taught in class, though, I know that in spite of those objective outcomes, some people will absolutely perceive this to be a dog-eat-dog validation exercise, while others will see it as an exercise in validating altruism. It’s really neither, it’s just a microcosmic formula for inputting different interactions and seeing their broad results. Its answer depends on its participants. Some people may feel it’s irrational to even hope for collaboration, and so they’ll be quite content with the more meager gains of the cheating route. Rather like real life.

      • malkav11 says:

        Whenever I see the prisoner’s dilemma presented (typically in fiction), it skews towards treachery and selfishness with the idea that this is somehow beneficial according to the rules of the game but the rules almost never support that conclusion. That’s all.

        • walruss says:

In a single iteration of the Prisoner’s dilemma, cheating gets you the best outcome possible no matter what your opponent does (it’s the dominant strategy, and mutual cheating is the game’s Nash equilibrium). So considered in isolation, with no other factors (like a shared morality, reputation, etc), you should always cheat. Even if you know your opponent, and know he’s going to take the trusting option, you’re better off, from a strictly material standpoint, to cheat if you’re only going to play the game once.

          This model has no relationship to reality. There is no circumstance in reality in which you 1) Don’t care about the other person’s fate, 2) Don’t care about your personal moral code, 3) Know for a fact you will never interact with them again, 4) Know that your betrayal will never be reported to others, and 5) Don’t anticipate any consequences for the entirety of the human race if this ethos is internalized. The model is talking about simple numbers. It’s a jumping off point for defining and discussing how to deal with the collective action problem, it’s not intended to have real world application in this form.
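
          To see that dominance argument concretely, here’s a tiny sketch with the coin-machine payoffs from the article – whichever move the opponent makes, cheating pays exactly one coin more:

          ```python
          # One-shot payoffs: your score given (your move, their move).
          payoff = {
              ("cooperate", "cooperate"): 2,
              ("cooperate", "cheat"):    -1,
              ("cheat", "cooperate"):     3,
              ("cheat", "cheat"):         0,
          }

          for theirs in ("cooperate", "cheat"):
              best = max(("cooperate", "cheat"),
                         key=lambda mine: payoff[(mine, theirs)])
              print(f"if they {theirs}: your best reply is {best}")

          # Cheating is the best reply to either move, which is why mutual
          # cheating is the one-shot equilibrium even though mutual
          # cooperation pays more.
          ```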

        • walruss says:

          Hate to double comment, but it occurs to me that this might also be related to the way sociopaths in fiction are always portrayed as calculating evil geniuses who get what they want through cold, evil logic. Logic and rationality are almost always presented as things that do not allow for morality and empathy. People who lack empathy are shown as not being tied down by traditional morals and therefore able to relentlessly pursue their own interests with cold, calculating focus.

          But we know that in reality, sociopaths are almost always below average IQ, and an inability to reason well is a symptom of sociopathy. I suspect the dissonance is caused by this assumption that logic and empathy are opposites, while this demonstration shows (in a simple form) that empathy and the ability to understand how others think is really very logical.

  11. TechnicalBen says:

    It is nice to have a positive view and goal like that.

    I also think it misses a point to some extent. Yes it shows that some systems have difficult or unintuitive “ideal” strategies. However we often mistake the “black box” for being unchangeable.

Which sadly often results in people taking horrible last-ditch actions. Instead we can look at our possible positive actions and possible changes to that horrible money-grabbing black box. As that is the system that allows and facilitates cheaters.

  12. sagredo1632 says:

    There used to be a contest to write AI for the repeated prisoner’s dilemma. I forget the tournament format, but the point was for the AIs to play each other and suss out winning strategies. I believe the repetition format was random stopping time, but I could be wrong. IIRC the winning strategy was either the tit-for-tat, or the once-betrayed-never-forgive.
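
    (That contest is almost certainly Robert Axelrod’s tournaments from around 1980, which tit-for-tat famously won; “once-betrayed-never-forgive” is usually called grim trigger. A bare-bones round-robin in that spirit is easy to sketch – the fixed 200 rounds, coin-machine payoffs, and lack of noise or self-play here are simplifications of mine, not the real format.)

    ```python
    # A bare-bones round-robin in the spirit of Axelrod's tournament.
    # Payoffs are the coin-machine numbers used throughout this page.
    def pay(me, them):
        return (-1 if me else 0) + (3 if them else 0)

    def tit_for_tat(mine, theirs):      return not theirs or theirs[-1]
    def grim_trigger(mine, theirs):     return all(theirs)  # never forgive a cheat
    def always_cooperate(mine, theirs): return True
    def always_cheat(mine, theirs):     return False

    def match(a, b, rounds=200):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(ha, hb), b(hb, ha)
            sa, sb = sa + pay(ma, mb), sb + pay(mb, ma)
            ha.append(ma)
            hb.append(mb)
        return sa, sb

    players = [tit_for_tat, grim_trigger, always_cooperate, always_cheat]
    totals = {p.__name__: 0 for p in players}
    for a in players:
        for b in players:
            if a is not b:
                sa, sb = match(a, b)
                totals[a.__name__] += sa
                totals[b.__name__] += sb

    print(totals)
    ```

    In this tiny lineup tit-for-tat and grim trigger actually tie for first; it took Axelrod’s much larger field of entries (and later noisy variants) to separate the forgiving strategies from the unforgiving ones.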

  13. Stevostin says:

This strikes me as something that can seem like it’s important and meaningful while it’s probably none of that. Game theory is fun for smart people and it has that “oh, now I get the world” vibe, but the simplifications necessary for the thinking to exist seem to me like they’re already way too much to allow for any predictive model.

But maybe the ability to predict outcomes is already established and I am just ignorant. Hmm, time to wikipedia this!

    • sagredo1632 says:

      You shouldn’t judge game theory as a whole on just this one simple model. That would be like learning 1+1=2 and then asking “So then what is math good for?” If money is your metric of utility, contract theory, auction theory and some fields of industrial organization all fall under the header of “game theory” and are all lucrative as consulting work (e.g. in union and trade negotiation).

  14. BaronKreight says:

    If only real life was that simple – cheat/cooperate/black/white.

  15. castle says:

    The UK game show Golden Balls was also inspired by the prisoner’s dilemma. Radiolab did an excellent piece on Golden Balls and game theory. If you google “Radiolab Golden Rule” it’ll be the first result. Definitely worth a listen.

  16. anon459 says:

    I’m just here to say that RPS is super dope for hosting an article about this game.

  17. hprice says:

    … and not one mention of Sesame Street. What a sad world we live in.

  18. walruss says:

    I really loved this, and I’d like to try a few experiments with this system:

1) What if tournament participants could gain knowledge (sometimes false) of their opponents’ behavior in prior tournaments? It’d be interesting to see how reputation ideas spread and fought for dominance. That’s a whole other game unto itself and would be difficult to implement, but it would give a more realistic model, as other commenters have pointed out.

2) What if tournament participants had a chance to be swapped out between rounds of individual tournaments, with or without the other participant’s knowledge? In other words, the copycat is busy copying the cheater during a tournament, and suddenly is instead playing against another copycat. You can see how this turns interactions that could be positive very sour, very quickly. This would more accurately model online interactions, where we interact with dozens of strangers a day and are likely to carry over our experiences in one interaction to the next, even when the interactions are not technically “anonymous.”

3) What if we allowed the amount of points gained for cooperating/cheating to be variable? How does that affect the outcome? Is there a player type, not included in the base game, which could exploit that uncertainty? (A rough sketch of this one follows below.)

    I’m sure there are a million more. I’m tempted to ditch my current project and focus on this instead.
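
    On that third one, here’s a rough sketch with invented numbers – redraw the “temptation” payoff every round and count how often cheating still dominates a one-shot round:

    ```python
    import random

    # Coin-machine values for reward, sucker and punishment; the temptation
    # payoff T is redrawn each round from an invented range.
    R, S, P = 2, -1, 0
    trials = 10_000
    dominant = 0
    for _ in range(trials):
        T = random.uniform(0, 4)  # temptation payoff, now uncertain
        # Cheating dominates a one-shot round only if T > R and P > S.
        if T > R and P > S:
            dominant += 1
    print(f"cheating dominant in {dominant / trials:.0%} of rounds")
    ```

    Once the temptation payoff can dip below the mutual-cooperation reward, a player who can read the current round’s payoffs presumably gains an edge over any fixed strategy – which sounds like the exploitative player type being asked about.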

  19. Podarkes says:

    It’s beautiful. Thanks for sharing.

  20. disconnect says:

    It’s time for some GAME THEORY