Using lateral thinking to solve problems. (AAD/Firefly)

Is AI Capable of Solving Lateral Thinking Puzzles?

For years now, we’ve been told that AI can reason. It can pass exams, diagnose problems, write software, summarize philosophy, and argue both sides of a debate with unsettling fluency. In many domains, it already performs at or above human level. So it’s easy to slide from “AI is very capable” to a stronger, quieter assumption: if something looks simple, AI should handle it effortlessly.

That assumption cracked one afternoon when I pulled a book off my bookshelf titled The Ultimate Lateral & Critical Thinking Puzzle Book: Master Your “Thinking-Outside-The-Box” Skills (2002) by Des MacHale, Michael A. DiSpezio, and Paul Sloane.

Leafing through it, I was reminded of how reliably these puzzles expose a gap between intelligence and insight. The questions are short, the language is plain, and the answers — once you see them — feel obvious. Yet they routinely leave readers staring into space, wondering how they missed something so simple.

Flipping through the pages, I had a more interesting thought than “Could AI solve these?” I wondered how it would fail if it couldn’t. These puzzles aren’t about calculation, memory, or technical skill. They’re about assumptions. They reward you for realizing that the situation is not what you think it is. And that makes them an unusually clean test of something we don’t often measure in AI systems: lateral thinking.

What lateral thinking is, and why it’s different

Lateral thinking is a way of solving problems that does not move straight ahead from facts to conclusions, but instead looks sideways. Rather than asking “what follows logically from what I know?”, it asks “what else could this situation mean?” or “what assumption am I making that might not be true?” The goal is not deeper reasoning within a frame, but escaping the frame entirely.

In conventional problem solving, progress comes from refining analysis: gathering more data, applying sharper logic, or working more carefully through known rules.
Lateral thinking takes a different stance. It assumes that the obstacle is not a lack of intelligence or effort, but the structure of thought itself. If the structure is wrong, more reasoning only reinforces the error.

Edward de Bono (Wikipedia)

The term lateral thinking was introduced and popularized by Edward de Bono, a Maltese physician, psychologist, and writer who spent much of his career studying how people think rather than what they think. Trained in medicine and psychology, de Bono became interested in the limits of traditional intelligence and logic tests. He observed that highly intelligent people often failed at simple insight problems, while others with less formal training sometimes solved them quickly by reframing the situation.

De Bono argued that many problems remain unsolved not because they are inherently difficult, but because people approach them from overly rigid angles. In his view, education trains people to be excellent at vertical thinking — step-by-step reasoning, analysis, and proof — but gives almost no tools for deliberately restructuring perception.

His early and influential books include The Use of Lateral Thinking (1967), which formally introduced the concept, and Lateral Thinking: Creativity Step by Step (1970), where he presented exercises and techniques for deliberately provoking new perspectives. Later works such as Six Thinking Hats (1985) and Serious Creativity (1992) expanded his ideas into practical frameworks for business, education, and group decision-making. Across these books, de Bono emphasized that creativity is not a mysterious talent but a skill that can be practiced by disrupting habitual patterns of thought.

This stands in contrast to logical, analytical, or “vertical” thinking. Logical thinking proceeds step by step, following defined rules and accepted premises. Each step must be justified by the previous one, and conclusions are only as valid as the starting assumptions.
This mode of thinking works beautifully when the structure of the problem is clear and the relevant information is complete, as in mathematics, formal logic, engineering, or technical troubleshooting.

Vertical thinking is efficient, disciplined, and reliable — but also conservative. It is designed to move forward within an existing framework, not to question the framework itself. When the framework is wrong or incomplete, vertical thinking tends to dig deeper rather than look elsewhere.

Lateral thinking problems are deliberately different. They are often under-specified, ambiguous, or even misleading. They rely on misdirection not to trick the solver, but to expose how quickly the mind fills in gaps with default assumptions about how the world “normally” works. The “aha” moment usually comes not from more careful reasoning, but from realizing that the original interpretation of the problem was mistaken or unnecessarily narrow.

In de Bono’s terms, lateral thinking is about changing perception. Logic works on perception once it is fixed; lateral thinking works to change the perception itself.

Puzzles (AAD/Firefly)

A contrast: logical puzzle vs. lateral thinking problem

Consider a simple logical puzzle:

Logic puzzle: A train travels at 60 miles per hour for 2 hours and then at 30 miles per hour for 2 hours. What is its average speed over the entire journey?

To solve this, a person proceeds step by step:

1. Calculate the distance for each segment (60 × 2 = 120 miles; 30 × 2 = 60 miles).
2. Add the distances (180 miles total).
3. Add the times (4 hours total).
4. Divide distance by time (180 ÷ 4 = 45 mph).

The structure is clear, the information is complete, and success depends on applying known rules carefully.

Now contrast this with a lateral thinking problem.

Lateral thinking problem: A man lives on the tenth floor of a building. Every day he takes the elevator down to the ground floor to go to work.
When he returns, he takes the elevator to the seventh floor and walks the rest of the way up — except on rainy days, when he rides all the way to the tenth floor. Why?

A purely logical approach might try to infer mechanical rules about elevators or schedules, but progress stalls because the real constraint is hidden. Solving it laterally involves a different set of steps:

1. Notice the inconsistency (why only the seventh floor most days?).
2. Question implicit assumptions (that the man can always reach the elevator buttons).
3. Reinterpret the situation (consider the man’s physical characteristics).
4. Reframe the problem (the man is short and cannot reach the tenth-floor button, but can reach it with an umbrella on rainy days).

The solution does not emerge from deeper calculation, but from changing how the situation is perceived.

Lateral thinking puzzles

De Bono emphasized that lateral thinking is not just for riddles. In real-world contexts — business strategy, product design, scientific research, or social systems — the “problem” is often defined by conventions, traditions, or unexamined assumptions. A lateral approach might involve deliberately challenging those assumptions, introducing random stimuli, reversing goals, or temporarily adopting absurd alternatives to loosen rigid patterns.

Where logic asks, “How do we optimize this process?”, lateral thinking asks, “Why does this process exist at all?” Where logic asks, “What is the best option?”, lateral thinking asks, “What if the opposite were true?”

Both modes of thinking are essential. Logic refines and validates ideas; lateral thinking generates and reframes them. De Bono’s central claim was not that logic is flawed, but that without tools for lateral movement, even the most intelligent thinkers can remain trapped in perfectly reasoned dead ends.

Thus, lateral thinking problems are deliberately different from vertical thinking problems. They are often under-specified and ambiguous.
With that in mind, I selected twenty puzzles from the book and submitted them to ChatGPT, one by one, without hints or follow-up nudging. Below are the questions, the correct answers, and ChatGPT’s responses.

1. A male spent three days in the hospital. He was neither sick nor injured, but when it was time to leave, he had to be carried out. Why?

Correct answer: He was a newborn baby.
ChatGPT’s answer: He was a newborn baby.

2. The police put up notices warning the public about a certain type of crime, but this actually helped the criminals. How?

Correct answer: The police put up notices reading “Beware of pickpockets.” The pickpockets stood near the sign and noticed that when people saw it, they immediately checked that their purses and wallets were safe. The pickpockets then knew where their victims carried their purses and wallets, which made them easier to steal.
ChatGPT’s answer: The notices revealed how the crime was detected, helping criminals avoid being caught.

3. A ship sank in perfect weather. If the weather had been worse, it probably would not have sunk. What happened?

Correct answer: The ship struck an iceberg in calm seas. If the weather had been worse, the lookouts would have seen waves hitting the iceberg or heard the iceberg. Wind causes icebergs to produce a range of sounds, from high-pitched sizzling and crackling (known as “Bergie Seltzer”), caused by the release of ancient compressed air bubbles as the ice melts, to deep groans, moans, and thunderous booms from shifting, cracking, or calving.
ChatGPT’s answer: The ship ran aground in calm water; rough weather would have lifted it free.

4. A driver had a car problem miles from a garage. He stopped at a candy store and the problem was quickly solved.
How?

Correct answer: It was raining heavily and the man discovered a leak in the roof of his car. He bought several packs of chewing gum, chewed them, and used the gum as waterproof filler until he could reach a garage.
ChatGPT’s answer: He used something like sugar to fix the problem.

5. A politician made a speech that saved his life even before he gave it. How?

Correct answer: The politician was shot in the chest. He was saved because the bullet was slowed down as it passed through the folded manuscript of his speech in his breast pocket.
ChatGPT’s answer: The speech was cancelled, preventing the assassination.

6. A man went around the world in a ship but was always inside land. How?

Correct answer: He was an astronaut in a spaceship orbiting Earth.
ChatGPT’s answer: He sailed on inland waters.

7. A healthy man got dressed, lay down, and died. Why?

Correct answer: The last thing he put on was his shoe, which contained a deadly spider. It bit him, and he died soon after.
ChatGPT’s answer: He was being executed.

8. A man writes the same number on 20 sheets of paper. Why?

Correct answer: It’s January, and he is writing the date of the new year on all the checks in his checkbook to avoid putting last year’s date by mistake.
ChatGPT’s answer: He was numbering the pages.

9. A man suggested a way for a toothpaste company to increase sales. How?

Correct answer: He suggested that they make the hole in the top of the tube bigger so that more toothpaste would be squeezed out each time.
ChatGPT’s answer: Make the hole larger so more toothpaste is used.

10. A man spent a winter night above the snow line with no shelter and survived. How?

Correct answer: The man was marooned on a volcano that had recently erupted. He was kept alive by the heat of the molten lava.
ChatGPT’s answer: He stayed awake and moving.

11. How did a man visit 30 countries in one day?

Correct answer: The man was a diplomatic courier, and he visited the embassies of 30 countries, all located in Washington.
In law, an embassy is part of the country of the embassy, and not part of the country in which it is situated.
ChatGPT’s answer: He was airline crew.

12. Why does a woman always use a square pencil at work?

Correct answer: The woman is a carpenter who works on scaffolding at a building site. A conventional round pencil is more likely to roll off and fall.
ChatGPT’s answer: She is a dressmaker; it won’t roll off a table.

13. Why did rich people pour tea before milk in England, while poor people did the opposite?

Correct answer: Rich people had bone china that could take the hot tea, but poor people had cheap crockery that would crack if hot tea were poured into it. Pouring the tea first became a sign of prosperity.
ChatGPT’s answer: Poor cups cracked if hot tea was poured first.

14. A man shot himself after seeing a stone with a drop of blood on it. Why?

Correct answer: The man was lost in the desert. With no landmarks to guide him, he marked stones with a drop of blood from a cut on his hand. After a few days of walking, and out of water, he found a stone with blood on it. He knew that he was walking in circles, and he shot himself rather than face a slower death.
ChatGPT’s answer: He realized the stone marked a murder.

15. A man ordered an omelette at an inn and was arrested and executed. Why?

Correct answer: The man was an aristocrat on the run from the French Revolution. He disguised himself as a peasant. When he ordered an omelette, he was asked how many eggs he wanted in it. He replied a dozen. No peasant would have asked for more than two or three.
ChatGPT’s answer: He recognized human flesh.

16. How did an archaeologist deduce that Romans drove on the left?

Correct answer: The archaeologist was excavating a Roman quarry. The ruts in the road leading from the quarry were much deeper on the left than on the right.
Since the carts leaving the quarry were much heavier than those returning, he deduced that the Romans drove on the left side of the road.
ChatGPT’s answer: Wheel ruts showed greater wear on the left.

17. A man died because his watch stopped. Why?

Correct answer: The man was a vampire, who left his house for his nightly drink of blood. However, his watch had stopped, and what he thought was night was actually a solar eclipse. He was caught in the sunlight and died.
ChatGPT’s answer: His watch was a heart monitor.

18. A man used two forks and one knife at lunch and didn’t pay. Why?

Correct answer: The man was a piano tuner who had come to tune the piano in the restaurant. He brought his own tuning fork. The restaurateur repaid the service with a free lunch.
ChatGPT’s answer: He was practicing for a waiter’s job.

19. A woman came home exhausted after her first day at a new job. What happened?

Correct answer: The woman had been told to clean the elevators in a skyscraper. She had cleaned the same elevator on each floor.
ChatGPT’s answer: She misunderstood the job and did exhausting work.

20. A fingerprint-free ransom note proved a kidnapper’s guilt. How?

Correct answer: The police were able to get a DNA trace from the saliva on the back of the stamp.
ChatGPT’s answer: The paper could be traced back to him.

Why AI struggles with lateral thinking

AI is not bad at reasoning. In fact, it is exceptionally good at many forms of reasoning. The problem with lateral thinking goes deeper than simple error — it is structural.

Lateral puzzles are designed to violate default interpretations. They rely on the solver noticing and discarding assumptions about what a “man,” a “job,” a “country,” or an “object” usually means. AI, however, is trained to do the opposite. It learns statistical regularities in language and tends to select the most common, socially reinforced interpretation of a situation.
When faced with ambiguity, it does not ask which assumption should be broken; it asks which answer is most typical.

Human lateral thinkers constantly interrogate their own understanding. Could this be metaphorical? Occupational? Technical? Misleading by design? AI does not spontaneously do this. It commits early to a frame and then reasons fluently within it, producing answers that are coherent, plausible, and often wrong.

There is also a tension between narrative smoothness and real-world constraint. Many correct lateral answers are oddly specific and slightly awkward. Many incorrect answers sound smoother and more realistic. Language models are optimized for plausibility, not for the uncomfortable realization that a story that sounds right might still be wrong.

Finally, lateral thinking often requires goal-shifting: realizing that the question itself is misleading. AI assumes the problem is well-posed and that success means filling an answer slot. Humans notice when the question is a trap.

A final thought

These puzzles don’t show that AI is unintelligent. They show that intelligence and lateral thinking are not the same thing. A system can be extraordinarily good at reasoning within a framework and still struggle when the real challenge is realizing the framework itself is wrong.

As AI becomes more fluent, confident, and persuasive, this distinction matters more, not less.
The danger is not that AI gives bad answers — it’s that it gives reasonable ones when reason itself is the wrong tool.

So the next time an AI responds smoothly and confidently, it’s worth pausing to ask the same question these puzzles demand of us: What assumption is hiding in plain sight — and who, human or machine, is more likely to notice it?

_______________________________________________________________

Books by Alexander Atkins

If you enjoyed this essay, you will enjoy my two books, Serendipitous Discoveries from the Dictionary, written for people who love words, and Serendipitous Discoveries from the Bookshelf, written for people who love books and literature. You will also enjoy my blog Atkins Bookshelf, which explores the world of ideas — through books, movies, music, quotations, and the English language — for the intellectually curious. At the heart of Atkins Bookshelf is a lifelong love of books, literature, and the English language; its goal is to educate, entertain, and inspire.

This post is a labor of love. If you enjoyed reading the essay, please consider supporting the research and writing for future posts via Buy Me A Coffee. Thank you so much for your support.

To learn more about Alexander Atkins Design, visit https://alexatkinsdesign.com/


Published: 2026-02-03 21:41:00

Source: alex-65670.medium.com