Spring Editorial: You Can’t Always Believe What You Hear


    When I was about 7 years old, I watched many small fish in a lake swirling around in a school.

    “Why do fish school?” I wondered. Their coordinated motion mesmerized me. I asked my mother why they did that. She was a Harvard-educated geneticist with great academic bona fides at a time when this was very rare for a woman. She told me, and later repeated, that there was safety in numbers.

    I asked something like, “Will the small fish gang up and attack the bigger fish coming to prey on them?”

    “No,” she answered, “they’re not that well organized. But with so many small fish, the odds are better for each fish not to be the one who is eaten.”


    People say fish school for self-protection, yet their predators encourage the schooling. How can it be good for both predator and prey?

    Every few years, I would ask her again, since for me there was something wrong with that reasoning. And at school I asked my biology teacher the same thing and got more or less the same answer. I’ll call this answer the “Bear Answer,” since it’s captured in an old joke: Two men were camping out in the wild when they came across a grizzly bear that started to give chase. One man quickly tied on his running shoes, and the other said to him, “What are you thinking? You can’t outrun a bear.” The man with the fast new shoes answered, “I don’t have to outrun the bear; I just have to outrun you.”

    I’m going to explain in this editorial why the “Bear Answer” strategy might work with a grizzly giving chase but doesn’t really work for fish or in lots of similar scenarios. But the bigger point is that everyone keeps repeating the wrong answer, though I admit it is attractive at some level. And humans are very prone to the assumption that common knowledge must be right, and hence is common sense, and thus is wisdom. Let’s look at the evidence for why this thinking, both in this specific case and more globally, is wrong, and then we can consider the new threat posed by ChatGPT and other forms of artificial intelligence-derived information.

    One day, about 20 years ago, I had an epiphany. I remember distinctly where and when. I was walking through the University of Southern California campus and talking by cell phone to my brother, who was, at the time, a professor of physics in Atlanta. In states like Georgia there are cicadas. These insects can be rather loud, and I had trouble hearing my brother over the background noise they made, a sound I recognized. I had spent a few years living in the South, and I knew the buzzing of cicadas well. Cicadas are interesting.

    Cicadas are large bugs with chunky bodies that make a lot of noise to attract mates. Their noise attracts birds, and they make no attempt to hide. They live in temperate to tropical climates. The North American variety spends most of its life as an underground nymph. But what is amazing and awesome is that they emerge from the ground, or from under bark, at set intervals of 13 (or 17) years, depending on the location. They “disappear” for 13 years and then pop up to eat, grow, and have sex for a few weeks. Why? How weird is that? They are plump and juicy and make no effort to escape the many birds that gorge on them when they come out. And, by the way, they never pop up in the wrong years.

    My epiphany was that the important year was not the year they came out. It was the intervening 12 years that deprived the birds of the chance to feast on them. Hence, no bird could depend on this diet or, worse, evolve into a cicada-eating specialist. Those 12 years made cicadas unreliable as prey, and any predator that started to evolve to depend on cicadas would starve waiting for their emergence. My mind immediately went back to the fish school, the one I had watched as a child.

    Where else do we see this? In World War II, America sent large numbers of merchant marine ships to England and, later, to ports along the northern coast of Russia. German submarines lay in wait to sink as many ships as they could. The U.S. strategy was to concentrate destroyers near our East Coast ports and also near the ports of call in England. In between, we did something very strategic: we massed the ships into large convoys.

    Some thought this foolish, since only two or three destroyers could be assigned to each convoy, not enough to screen against the many U-boat wolfpacks. The merchant ships were hardly armed at all and could not put up much of a fight. They were sitting ducks, made all the more visible by being gathered into large convoys. The strategy was indeed controversial. But wise heads prevailed.

    The convoys weren’t so much to protect the merchant ships after being found; they were to make them less likely to be encountered in the first place. The ocean is huge, and by concentrating and traveling on zigzag courses, the ships were less likely to be found by the German wolfpacks. So ships school in convoys just as fish do, not to fight off the predator and not even to reduce the odds of a given loss, but to lessen the total number of encounters in the ocean.
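
    Here is a back-of-the-envelope way to see the logic (my own sketch, not anything from the wartime planners). Assume a searching U-boat, or any predator, finds a given target with some small probability p that grows much more slowly than the size of the target, and that once it finds a target it can take at most c victims before it is driven off or sated. For N ships or fish, the expected losses under the two strategies look roughly like this:

```latex
% Rough expected losses under the assumptions stated above (uses amsmath).
% Traveling alone: N separate targets, one victim per encounter.
% Traveling in groups of size s: only N/s targets, at most c victims each.
\[
  \underbrace{p\,N}_{\text{traveling alone}}
  \qquad \text{versus} \qquad
  \underbrace{p\,\tfrac{N}{s}\,c}_{\text{traveling in groups of size } s}
\]
```

    As long as c is much smaller than s, grouping cuts the expected losses by roughly a factor of s/c. Notice that this has nothing to do with dilution; dilution only redistributes risk within a group that has already been found.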

    A 7-year-old today wouldn’t ask his mother or his teacher; he’d check with ChatGPT. And what does ChatGPT (4.0) say?

    Q: Why do fish school? “Protection: By swimming in groups, fish reduce their individual chances of being targeted by predators. Being in a group means that the chance of any one individual being caught is reduced. This is known as the ‘dilution effect.’ If a predator attacks a school, the risk of being the one caught is spread out over many individuals, thus reducing the individual risk for each fish.” That’s pretty much what my mother said.

    More importantly, it is not surprising, given how many people say the same thing in print. But that’s the thing: you can’t trust that ChatGPT won’t confuse frequent answers with correct ones. I suppose I’m being arrogant. After all, although it’s true that I’ve been thinking about this for almost 70 years and ChatGPT 4.0 took less than one second, it’s obviously a lot smarter than I am. After all, it has access to the entire internet.

    But ChatGPT is wrong. The dilution effect would work regardless of schooling. The vast ocean is filled with fish, and if the dilution effect frustrated predators along the lines of the ChatGPT argument, then predators would not have evolved to encourage it. In fact, fish like tuna work together to encourage the schooling of their prey. And so do killer whales.

    Orcas swim in circles around the school to pack it tighter and tighter before plunging in to get their fill. How could it be advantageous to both the prey and the predator for schools to form? This is a zero-sum game. As I suggested with the cicadas, the secret is creating gaps in space and time. Millions of years of evolution can’t be wrong.

    ChatGPT doesn’t understand rules or reasoning. The reason schooling works for fish is not dilution but absence. Schooling leaves larger tracts of ocean devoid of fish. Predators wandering through the ocean will encounter fewer prey, and this will limit their growth and fecundity. There will be fewer predators in the ocean, just as there were fewer birds devoted to eating cicadas. Absence not only makes the heart grow fonder but keeps the predators in check. But I couldn’t get ChatGPT to understand.
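
    To make the distinction concrete, here is a toy simulation of my own (all the numbers — the school size, detection radius, and satiation limit — are invented for illustration, not measured from any real fishery). A predator probes random spots in a small square ocean; the same number of prey are either scattered one by one or packed into schools, and the predator can eat only a few fish per encounter before it is full:

```python
# Toy sketch only: compares total prey eaten when 1,000 prey are scattered
# singly versus packed into schools of 50. A predator "probes" random points
# and detects any group within a fixed radius, but can eat at most
# `satiation` fish per encounter. Groups are not depleted between probes;
# that is a deliberate simplification.
import math
import random

def simulate(n_prey=1000, school_size=1, detect_radius=0.02,
             probes=5000, satiation=3, seed=0):
    rng = random.Random(seed)
    # Place each group (a lone fish or a whole school) uniformly in the unit square.
    n_groups = n_prey // school_size
    groups = [(rng.random(), rng.random()) for _ in range(n_groups)]
    eaten = 0
    for _ in range(probes):
        px, py = rng.random(), rng.random()  # predator's random search point
        for gx, gy in groups:
            if math.hypot(px - gx, py - gy) < detect_radius:
                # Found a group, but satiation caps the damage per encounter.
                eaten += min(satiation, school_size)
    return eaten

print("prey eaten when scattered singly:", simulate(school_size=1))
print("prey eaten when schooled (s=50) :", simulate(school_size=50))
```

    With these made-up numbers, the schooled population loses far fewer fish, not because any individual’s risk is “diluted” once a school is found, but because most of the predator’s searches now come up empty.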

    ChatGPT is wrong. My mother was wrong (she was just reflexively repeating the answer she had heard from her own teachers; when I later presented my thinking to her, she agreed and doodled out the math that modeled the situation). The many teachers who passed along the wrong answer were also wrong. But people are more likely to believe ChatGPT.

    One big take-home point is that wrong answers are ubiquitous, especially when they come from people who aren’t the experts. Scientists spend a lot of time questioning their assumptions. Most others don’t.

    Don’t believe all conventional answers. Truth is not democratic; yet we are now in a world where popular answers, fueled by the internet, are given credence over deep thinking from well-credentialed authorities. Several times, I’ve encountered patients who just didn’t believe me, even in areas where I am an expert and have published extensively. When I asked them why they were rejecting my conclusions, they answered, “I’ve done my own research.” By this they meant they had searched social media.

    I hope that those long years of PhD training taught me what real research is like. And it’s hard. You have to read a lot of references, compare them, toss out those that come from questionable sources, and understand ascertainment bias, logical errors, and the limits of extrapolating from the populations studied to the more general population. Does ChatGPT do that? I don’t think so. ChatGPT is a “stochastic parrot.”

    This is an apt term coined by Emily M. Bender at the University of Washington to remind us that ChatGPT is pretty random and repeats what it hears (without understanding). ChatGPT and other tools of AI will provide us with more and more information and conveniently place it at our fingertips, but we will lose the ability to filter it for significance, depth or even truth. With ChatGPT we cannot separate the wheat of truth from the chaff of nonsense. And like the patient who calmly explained to me that she didn’t need my expert opinion because she had already done her own research, most people will fall into the lazy habit of lowering the bar of truth. They will accept mediocre and even faulty knowledge without reference to authority, expertise, and deeper thinking. At best, everyone will be a jack of all trades but master of none. At worst, we’ll all be like kindergarten children mindlessly repeating the last thing we heard on the playground.

    Socrates (as reported by Plato) said that in a democratic society of equals, knowledge will become a “corruption of the majority and people will make ill-informed and foolish decisions.” After more than two thousand years, ChatGPT may prove him right.