
Quick and dirty summary

Current-stage AI is best viewed through an economic lens: as prediction becomes cheap, the value of its complements, such as human judgment and data, goes up, while the value of substitutes, such as human prediction, goes down. White-collar jobs are surprisingly at risk.

Notebook for Prediction Machines: The Simple Economics of Artificial Intelligence
Agrawal, Ajay
Citation (APA): Agrawal, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence [Kindle Android version]. Retrieved from Amazon.com

1. Introduction: Machine Intelligence

Highlight (yellow) - Location 118
AI is a prediction technology, predictions are inputs to decision making, and economics provides a perfect framework for understanding the trade-offs underlying any decision.

Highlight (yellow) - Location 121
Our first key insight is that the new wave of artificial intelligence does not actually bring us intelligence but instead a critical component of intelligence—prediction.

Note - Location 122
Tbh, I think inference is a more apt word.

Highlight (yellow) - Location 144
Prediction Machines is not a recipe for success in the AI economy. Instead, we emphasize trade-offs. More data means less privacy. More speed means less accuracy. More autonomy means less control.

2. Cheap Changes Everything

Highlight (yellow) - Location 170
Stephen Hawking emphatically explained, “[E]verything that civilisation has to offer is a product of human intelligence … [S]uccess in creating AI would be the biggest event in human history.”1

Highlight (yellow) - Location 212
The rise of the internet was a drop in the cost of distribution, communication, and search. Reframing a technological advance as a shift from expensive to cheap or from scarce to abundant is invaluable for thinking about how it will affect your business.

Highlight (yellow) - Location 227
Technological change makes things cheap that were once expensive. The cost of light fell so much that it changed our behavior from thinking about whether we should use it to not thinking for even a second before flipping on a light switch. Such significant price drops create opportunities to do things we’ve never done; it can make the impossible possible. So, economists are unsurprisingly obsessed with the implications of massive price drops in foundational inputs like light.

Highlight (yellow) - Location 246
Reducing something to pure cost terms has a way of cutting through hype, although it does not help make the latest and greatest technology seem exciting. You’d never have seen Steve Jobs announce “a new adding machine,” even though that is all he ever did. By reducing the cost of something important, Jobs’s new adding machines were transformative.

Highlight (yellow) - Location 257
What will new AI technologies make so cheap? Prediction. Therefore, as economics tells us, not only are we going to start using a lot more prediction, but we are going to see it emerge in surprising new places.

Highlight (yellow) - Location 289
when an input such as prediction becomes cheap, this can enhance the value of other things. Economists call these “complements.” Just as a drop in the cost of coffee increases the value of sugar and cream, for autonomous vehicles, a drop in the cost of prediction increases the value of sensors to capture data on the vehicle’s surroundings.

Highlight (yellow) - Location 293
When prediction is cheap, there will be more prediction and more complements to prediction.

Highlight (yellow) - Location 307
The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them.

Highlight (yellow) - Location 323
Amazon obtained a US patent for “anticipatory shipping” in 2013.10

Highlight (yellow) - Location 338
Prediction facilitates decisions by reducing uncertainty, while judgment assigns value. In economists’ parlance, judgment is the skill used to determine a payoff, utility, reward, or profit. The most significant implication of prediction machines is that they increase the value of judgment.
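The split at Location 338 between prediction (reducing uncertainty) and judgment (assigning value) can be made concrete with a toy decision rule. Everything below (the umbrella actions and the payoff numbers) is invented for the sketch, not from the book:

```python
# Toy illustration of the book's framing: a decision combines a machine's
# prediction (a probability) with human judgment (a payoff table).
# All actions and payoff numbers here are hypothetical.

def expected_value(action_payoffs, p_rain):
    """Expected payoff of an action given the predicted chance of rain."""
    return p_rain * action_payoffs["rain"] + (1 - p_rain) * action_payoffs["no_rain"]

# Judgment: what each outcome is worth to us (assumed, not learned).
payoffs = {
    "take umbrella":  {"rain": 8,   "no_rain": 5},   # dry either way, mild hassle
    "leave umbrella": {"rain": -10, "no_rain": 10},  # soaked if it rains
}

def decide(p_rain):
    # Prediction supplies p_rain; judgment supplied the payoff table above.
    return max(payoffs, key=lambda action: expected_value(payoffs[action], p_rain))

print(decide(0.25))  # higher chance of rain -> "take umbrella"
print(decide(0.05))  # low chance of rain -> "leave umbrella"
```

Cheaper, better prediction changes `p_rain`; the payoff table is the part only a human can fill in, which is why its value rises as prediction gets cheap.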
Highlight (yellow) - Location 362
The drop in the cost of prediction will impact the value of other things, increasing the value of complements (data, judgment, and action) and diminishing the value of substitutes (human prediction).

Highlight (yellow) - Location 363
Organizations can exploit prediction machines by adopting AI tools to assist with executing their current strategy. When those tools become powerful, they may motivate changing the strategy itself.

Part One: Prediction

Highlight (yellow) - 3. Prediction Machine Magic > Location 389
PREDICTION is the process of filling in missing information. Prediction takes information you have, often called “data,” and uses it to generate information you don’t have.

Highlight (yellow) - 3. Prediction Machine Magic > Location 440
the change from 98 percent to 99.9 percent has been transformational. The change from 98 percent to 99.9 percent might seem incremental, but small changes are meaningful if mistakes are costly. An improvement from 85 percent to 90 percent accuracy means that mistakes fall by one-third. An improvement from 98 percent to 99.9 percent means mistakes fall by a factor of twenty. An improvement of twenty no longer seems incremental.

Note - 3. Prediction Machine Magic > Location 444
Critical mass

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 509
What does regression do? It finds a prediction based on the average of what has occurred in the past.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 514
“the conditional average.” For instance, if you live in northern California, you may have past knowledge that the likelihood of rain depends on the season—low in the summer and high in the winter. If you observe that during the winter, the probability of rain on any given day is 25 percent, while during the summer, it is 5 percent, you would not assess that the probability of rain tomorrow is the average—15 percent. Why? Because you know whether tomorrow is winter or summer, so you would condition your assessment accordingly.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 523
Before machine learning, multivariate regression provided an efficient way to condition on multiple things, without the need to calculate dozens, hundreds, or thousands of conditional averages. Regression takes the data and tries to find the result that minimizes prediction mistakes, maximizing what is called “goodness of fit.”

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 526
Regression minimizes prediction mistakes on average and punishes large errors more than small ones.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 537
Being precisely perfect on average can mean being actually wrong each time. Regression can keep missing several feet to the left or several feet to the right. Even if it averages out to the correct answer, regression can mean never actually hitting the target.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 538
Unlike regression, machine learning predictions might be wrong on average, but when the predictions miss, they often don’t miss by much. Statisticians describe this as allowing some bias in exchange for reducing variance.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 599
Many problems have transformed from algorithmic problems (“what are the features of a cat?”) to prediction problems (“does this image with a missing label have the same features as the cats I have seen before?”). Machine learning uses probabilistic models to solve problems.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 604
In his book On Intelligence, Jeff Hawkins was among the first to argue that prediction is the basis for human intelligence. The essence of his theory is that human intelligence, which is at the core of creativity and productivity gains, is due to the way our brains use memories to make predictions: “We are making continuous low-level predictions in parallel across all our senses. But that’s not all. I am arguing a much stronger proposition. Prediction is not just one of the things your brain does. It is the primary function of the neocortex, and the foundation of intelligence. The cortex is an organ of prediction.”

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 610
As we develop and mature, our brains’ predictions are increasingly accurate; the predictions often come true. However, when predictions do not accurately predict the future, we notice the anomaly, and this information is fed back into our brain, which updates its algorithm, thus learning and further enhancing the model.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 613
Hawkins’s work is controversial. His ideas are debated in the psychology literature, and many computer scientists flatly reject his emphasis on the cortex as a model for prediction machines.

Highlight (yellow) - 4. Why It’s Called Intelligence > Location 631
Machine learning science had different goals from statistics. Whereas statistics emphasized being correct on average, machine learning did not require that. Instead, the goal was operational effectiveness. Predictions could have biases so long as they were better (something that was possible with powerful computers). This gave scientists a freedom to experiment and drove rapid improvements that take advantage of the rich data and fast computers that appeared over the last decade.

Highlight (yellow) - 5. Data Is the New Oil > Location 650
Prediction machines rely on data. More and better data leads to better predictions. In economic terms, data is a key complement to prediction. It becomes more valuable as prediction becomes cheaper.

Highlight (yellow) - 5. Data Is the New Oil > Location 729
Data scientists have excellent tools for assessing the amount of data required given the expected reliability of the prediction and the need for accuracy. These tools are called “power calculations” and tell you how many units you need to analyze to generate a useful prediction.

Highlight (yellow) - 5. Data Is the New Oil > Location 765
Some have argued that more data about unique factors brings disproportionate rewards in the market.6 Thus, from an economic point of view, in such cases data may have increasing returns to scale.

Highlight (yellow) - 6. The New Division of Labor > Location 791
Adam Smith’s eighteenth-century economic thinking on the division of labor that involves allocating roles based on relative strengths. Here, the division of labor is between humans and machines in generating predictions.

Highlight (yellow) - 6. The New Division of Labor > Location 802
humans are poor statisticians, even in situations when they are not too bad at assessing probabilities.

Highlight (yellow) - 6. The New Division of Labor > Location 805
When they told people to consider two hospitals—one with forty-five births per day and another with fifteen births per day—and asked which hospital would have more days when 60 percent or more of the babies born are boys, very few gave the correct answer—the smaller hospital.

Highlight (yellow) - 6. The New Division of Labor > Location 810
the smaller hospital—precisely because it has fewer births—is more likely to have more extreme outcomes away from the average.

Highlight (yellow) - 6. The New Division of Labor > Location 821
Kahneman concludes that if there is a way of predicting using a formula instead of a human, the formula should be considered seriously.

Highlight (yellow) - 6. The New Division of Labor > Location 842
for the 1 percent of defendants that the machine classified as riskiest, it predicted that 62 percent would commit crimes while out on bail. Nevertheless, the human judges (who did not have access to the machine predictions) opted to release almost half of them. The machine predictions were reasonably accurate, with 63 percent of the machine-identified high-risk offenders actually committing a crime while on bail and over half not appearing at the next court date. Five percent of those the machine identified as high risk committed rape or murder while on bail.

Note - 6. The New Division of Labor > Location 846
Minority report

Highlight (yellow) - 6. The New Division of Labor > Location 858
In a study of hiring across fifteen low-skilled service firms, Mitchell Hoffman, Lisa Kahn, and Danielle Li found that when the firms used an objective and verifiable test along with normal interviews, there was a 15 percent bump in the job tenure of hires relative to when they made hiring decisions based on interviews alone.8 For these jobs, managers were instructed to maximize tenure.

Highlight (yellow) - 6. The New Division of Labor > Location 862
when the discretion of hiring managers was restricted—preventing managers from overruling test scores when those scores were unfavorable—an even higher job tenure and a reduced quit rate occurred. So, even when instructed to maximize tenure, when experienced at hiring, and when given fairly accurate machine predictions, the managers still made poor predictions.

Highlight (yellow) - 6. The New Division of Labor > Location 920
“reverse causality.” You are reading this book because you already use prediction machines or have definite plans to do so in the near future. The book didn’t cause the technology adoption; instead, the (perhaps pending) technology adoption caused you to read this book.

Highlight (yellow) - 6. The New Division of Labor > Location 973
In 2016, a Harvard/MIT team of AI researchers won the Camelyon Grand Challenge, a contest that produces computer-based detection of metastatic breast cancer from slides of biopsies. The team’s winning deep-learning algorithm made the correct prediction 92.5 percent of the time compared with a human pathologist whose performance was at 96.6 percent. While this seemed like a victory for humanity, the researchers went further and combined the predictions of their algorithm and a pathologist’s. The result was an accuracy of 99.5 percent.20 That is, the human error rate of 3.4 percent fell to just 0.5 percent. Errors fell by 85 percent.

Highlight (yellow) - 6. The New Division of Labor > Location 988
The theory is that humans who must answer for why their prediction differed from an objective algorithm might only overrule machines if they put in extra effort to ensure they are sufficiently confident.

Highlight (yellow) - 6. The New Division of Labor > Location 1010
many human-machine collaborations will take the form of “prediction by exception.”

Highlight (yellow) - 6. The New Division of Labor > Location 1011
prediction machines learn when data is plentiful, which happens when they are dealing with more routine or frequent scenarios. In these situations, the prediction machine operates without the human partner expending attention. By contrast, when an exception arises—a scenario that is non-routine—it is communicated to the human, and then the human puts in more effort to improve and verify the prediction. This “prediction by exception” is precisely what happened with the Colombian bank loan committee.

Highlight (yellow) - 6. The New Division of Labor > Location 1030
Humans, including professional experts, make poor predictions under certain conditions. Humans often overweight salient information and do not account for statistical properties.
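The error-rate arithmetic running through these highlights (85 to 90 percent and 98 to 99.9 percent at Location 440, and the 3.4 to 0.5 percent pathology result above) is easy to verify with a few lines; the function below is just that arithmetic, not anything from the book:

```python
# Measure an accuracy improvement as the fraction of mistakes eliminated,
# using the figures quoted in the highlights.

def error_reduction(old_acc, new_acc):
    """Fraction of mistakes eliminated when accuracy rises from old_acc to new_acc."""
    old_err, new_err = 1 - old_acc, 1 - new_acc
    return (old_err - new_err) / old_err

print(error_reduction(0.85, 0.90))    # mistakes fall by one-third (~0.333)
print(error_reduction(0.98, 0.999))   # 95% of mistakes gone: 2% error / 0.1% error = a factor of 20
print(error_reduction(0.966, 0.995))  # pathologist + machine: errors fall by ~85 percent
```

The point the book makes falls out directly: the same-looking percentage-point gain matters far more when the remaining error rate is already small.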
Many scientific studies document these shortcomings across a wide variety of professions. The phenomenon was illustrated in the feature film Moneyball.

Highlight (yellow) - 6. The New Division of Labor > Location 1033
Machines and humans have distinct strengths and weaknesses in the context of prediction. As prediction machines improve, businesses must adjust their division of labor between humans and machines in response. Prediction machines are better than humans at factoring in complex interactions among different indicators, especially in settings with rich data. As the number of dimensions for such interactions grows, the ability of humans to form accurate predictions diminishes, especially relative to machines. However, humans are often better than machines when understanding the data generation process confers a prediction advantage, especially in settings with thin data.

Highlight (yellow) - 6. The New Division of Labor > Location 1039
Prediction machines scale. The unit cost per prediction falls as the frequency increases. Human prediction does not scale the same way. However, humans have cognitive models of how the world works and thus can make predictions based on small amounts of data. Thus, we anticipate a rise in human prediction by exception whereby machines generate most predictions because they are predicated on routine, regular data, but when rare events occur the machine recognizes that it is not able to produce a prediction with confidence, and so calls for human assistance.

Part Two: Decision Making

Highlight (yellow) - 7. Unpacking Decisions > Location 1088
As machine prediction increasingly replaces the predictions that humans make, the value of human prediction will decline.

Highlight (yellow) - 7. Unpacking Decisions > Location 1089
judgment, data, and action—remain, for now, firmly in the realm of humans. They are complements to prediction, meaning they increase in value as prediction becomes cheap.

Highlight (yellow) - 7. Unpacking Decisions > Location 1156
human prediction, will decline. However, the value of complements, such as the human skills associated with data collection, judgment, and actions, will become more valuable.

Highlight (yellow) - 8. The Value of Judgment > Location 1170
Prediction machines don’t provide judgment. Only humans do, because only humans can express the relative rewards from taking different actions.

Highlight (yellow) - 8. The Value of Judgment > Location 1173
With better prediction come more opportunities to consider the rewards of various actions—in other words, more opportunities for judgment. And that means that better, faster, and cheaper prediction will give us more decisions to make.

Note - 8. The Value of Judgment > Location 1175
So as a human, improve your decision making, critical thinking, and heuristics in judgment to complement prediction machines.

Highlight (yellow) - 8. The Value of Judgment > Location 1234
Humans experience the cognitive costs of judgment as a slower decision-making process. We all have to decide how much we want to pin down the payoffs against the costs of delaying a decision.

Highlight (yellow) - 8. The Value of Judgment > Location 1241
Individuals might take different actions in the same circumstances and learn what the reward actually is.

Highlight (yellow) - 8. The Value of Judgment > Location 1243
experimentation necessarily means making what you will later regard as mistakes, experiments also have costs.

Highlight (yellow) - 8. The Value of Judgment > Location 1249
before Google acquired it, the Israeli startup Waze generated accurate traffic maps by tracking the routes drivers chose. It then used that information to provide efficient optimization of the quickest path between two points, taking into account the information it had from drivers as well as continual monitoring of traffic. It could also forecast how traffic conditions might evolve if you were traveling farther and could offer new, more efficient paths en route if conditions changed.

Highlight (yellow) - 8. The Value of Judgment > Location 1290
uncertainty increases the cost of judging the payoffs for a given decision.

Highlight (yellow) - 8. The Value of Judgment > Location 1298
reward function engineering, the job of determining the rewards to various actions, given the predictions that the AI makes. Doing this job well requires an understanding of the organization’s needs and the machine’s capabilities.

Highlight (yellow) - 8. The Value of Judgment > Location 1309
Most of us already do some reward function engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff and then tweak them to get better performance. Every day, we make decisions and judge the rewards.

Note - 8. The Value of Judgment > Location 1311
Economic incentives.

Highlight (yellow) - 8. The Value of Judgment > Location 1330
Prediction machines are a tool for humans. So long as humans are needed to weigh outcomes and impose judgment, they have a key role to play as prediction machines improve.

Highlight (yellow) - 8. The Value of Judgment > Location 1332
Prediction machines increase the returns to judgment because, by lowering the cost of prediction, they increase the value of understanding the rewards associated with actions. However, judgment is costly. Figuring out the relative payoffs for different actions in different situations takes time, effort, and experimentation.

Note - 8. The Value of Judgment > Location 1335
Insight: while the cost of prediction will fall, the value of judgment will increase. What that means for future jobs is that multi-disciplinary macro-thinkers will be even more highly valued by cutting-edge organizations. Mental models like those Munger advocates are key.

Highlight (yellow) - 8. The Value of Judgment > Location 1337
Under conditions of uncertainty, we need to determine the payoff for acting on wrong decisions, not just right ones. So, uncertainty increases the cost of judging the payoffs for a given decision.

Highlight (yellow) - 8. The Value of Judgment > Location 1338
If there are a manageable number of action-situation combinations associated with a decision, then we can transfer the judgment from ourselves to the prediction machine (this is “reward function engineering”) so that the machine can make the decision itself once it generates the prediction. This enables automating the decision. Often, however, there are too many action-situation combinations, such that it is too costly to code up in advance all the payoffs associated with each combination, especially the very rare ones. In these cases, it is more efficient for a human to apply judgment after the prediction machine predicts.

Note - 8. The Value of Judgment > Location 1343
Human in the loop.

Highlight (yellow) - 9. Predicting Judgment > Location 1389
Humans are a resource, so simple economics suggest they will still do something. The question is more whether the “something” for humans is high or low value, appealing or unappealing.

Highlight (yellow) - 9. Predicting Judgment > Location 1391
Prediction relies on data. That means humans have two advantages over machines. We know some things that the machines don’t (yet), and, more importantly, we are better at deciding what to do when there isn’t much data.

Highlight (yellow) - 9. Predicting Judgment > Location 1393
Humans have three types of data that machines don’t. First, human senses are powerful. In many ways, human eyes, ears, nose, and skin still surpass machine capabilities. Second, humans are the ultimate arbiters of our own preferences. Consumer data is extremely valuable because it gives prediction machines data about these preferences.

Highlight (yellow) - 9. Predicting Judgment > Location 1397
Third, privacy concerns restrict the data available to machines. As long as enough people keep their sexual activity, financial situation, mental health status, and repugnant thoughts to themselves, the prediction machines will have insufficient data to predict many types of behavior. In the absence of good data, our understanding of other humans will provide a role for our judgment skills that machines cannot learn to predict.

Highlight (yellow) - 9. Predicting Judgment > Location 1401
Prediction machines may also lack data because some events are rare. If a machine cannot observe enough human decisions, it cannot predict the judgment underlying those decisions.

Note - 9. Predicting Judgment > Location 1403
Prediction machines aren't good at predicting black swan events.

Highlight (yellow) - 9. Predicting Judgment > Location 1462
Machines are bad at prediction for rare events. Managers make decisions on mergers, innovation, and partnerships without data on similar past events for their firms. Humans use analogies and models to make decisions in such unusual situations. Machines cannot predict judgment when a situation has not occurred many times in the past.

Note - 9. Predicting Judgment > Location 1465
So far, signals: machines are bad at (1) innovation, (2) changing circumstances that cause the model to become outdated, and (3) predicting rare events. Humans are effective at the above because (1) they are creative, creating things they think have a perceived benefit even if the result can be wildly off; (2) they can adapt to change without the long initial training period of machines; (3) they can learn from rare events (say, a single example), even if they tend to overfit.

Highlight (yellow) - 10. Taming Complexity > Location 1526
More “Ifs” and “Thens”

Note - 10. Taming Complexity > Location 1526
Any if-then problem can be converted to a machine learning problem given enough training examples.

Highlight (yellow) - 10. Taming Complexity > Location 1530
Economics Nobel Prize–winner Herbert Simon called this “satisficing.”

Note - 10. Taming Complexity > Location 1531
"decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world. Neither approach, in general, dominates the other, and both have continued to co-exist in the world of management science"

Highlight (yellow) - 10. Taming Complexity > Location 1562
Advances in AI mean less need for satisficing and more “ifs” and more “thens.” More complexity with less risk. This transforms decision making by expanding options.

Highlight (yellow) - 10. Taming Complexity > Location 1577
We are so used to satisficing in our businesses and in our social lives that it will take practice to imagine the vast array of transformations possible as a result of prediction machines that can handle more “ifs” and “thens” and, thus, more complex decisions in more complex environments. It’s not intuitive for most people to think of airport lounges as a solution to poor prediction and that they will be less valuable in an era of powerful prediction machines.

Highlight (yellow) - 11. Fully Automated Decision Making > Location 1646
What do accident prevention and automated sports cameras have in common? In each, there are high returns for quick action responses to predictions and judgment is either codifiable or predictable. Automation occurs when the return to machines handling all functions is greater than the returns to including humans in the process.

Highlight (yellow) - 11. Fully Automated Decision Making > Location 1674
What distinguishes the “within factory” environment from the “open road” is the possibility of what economists call “externalities”—costs that are felt by others, rather than the key decision makers.

Note - 11. Fully Automated Decision Making > Location 1675
Collateral damage

Highlight (yellow) - 11. Fully Automated Decision Making > Location 1698
The introduction of AI to a task does not necessarily imply full automation of that task. Prediction is only one component. In many cases, humans are still required to apply judgment and take an action. However, sometimes judgment can be hard coded or, if enough examples are available, machines can learn to predict judgment. In addition, machines may perform the action. When machines perform all elements of the task, then the task is fully automated and humans are completely removed from the loop.

Highlight (yellow) - 11. Fully Automated Decision Making > Location 1701
The tasks most likely to be fully automated first are the ones for which full automation delivers the highest returns. These include tasks where: (1) the other elements are already automated except for prediction (e.g., mining); (2) the returns to speed of action in response to prediction are high (e.g., driverless cars); and (3) the returns to reduced waiting time for predictions are high (e.g., space exploration).

Highlight (yellow) - 11. Fully Automated Decision Making > Location 1710
We anticipate a significant wave of policy development concerning the assignment of liability driven by an increasing demand for many new areas of automation.

Part Three: Tools

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1742
Like classical computing, AI is a general-purpose technology. It has the potential to affect every decision, because prediction is a key input to decision making.

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1745
AI is the type of technology that requires rethinking processes in the same way that Hammer and Champy did.

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1770
AI tools can change work flows in two ways. First, they can render tasks obsolete and therefore remove them from work flows. Second, they can add new tasks.

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1807
by decomposing work flows, businesses can assess whether prediction machines are likely to reach well beyond the individual decisions for which they may have been designed.

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1822
they had a keyboard that looked like a small QWERTY keyboard with a substantial tweak. While the image the user saw did not change, the surface area around a particular set of keys expanded when typing. When you type a “t,” it is highly probable the next letter will be an “h” and so the area around that key expanded. Following that, “e” and “i” expanded, and so on.

Note - 12. Deconstructing Work Flows > Location 1825
Apple keyboard

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1835
Large corporations are comprised of work flows that turn inputs into outputs. Work flows are made up of tasks (e.g., a Goldman Sachs IPO is a work flow comprised of 146 distinct tasks). In deciding how to implement AI, companies will break their work flows down into tasks, estimate the ROI for building or buying an AI to perform each task, rank-order the AIs in terms of ROI, and then start from the top of the list and begin working downward. Sometimes a company can simply drop an AI tool into their work flow and realize an immediate benefit due to increasing the productivity of that task. Often, however, it’s not that easy. Deriving a real benefit from implementing an AI tool requires rethinking, or “reengineering,” the entire work flow. As a result, similar to the personal computer revolution, it will take time to see productivity gains from AI in many mainstream businesses.

Highlight (yellow) - 12. Deconstructing Work Flows > Location 1841
To illustrate the potential effect of an AI on a work flow, we describe a fictitious AI that predicts the ranking of any MBA application. To derive the full benefit from this prediction machine, the school would have to redesign its work flow.
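The adoption recipe at Location 1835 (break work flows into tasks, estimate ROI per task, rank, work down the list) is mechanical enough to sketch. The task names, dollar figures, and hurdle rate below are all hypothetical:

```python
# Hypothetical task list for an AI adoption exercise: estimated annual
# benefit of adding a prediction machine to each task vs. the cost to
# build or buy that AI. Numbers are invented for illustration.
tasks = [
    ("rank applications",   500_000, 100_000),
    ("forecast enrollment", 200_000,  80_000),
    ("route campus mail",    10_000,  50_000),
]

HURDLE = 1.0  # assumed hurdle: require ROI >= 1.0, i.e. benefit at least twice cost

def roi(benefit, cost):
    """Simple return on investment: net gain as a fraction of cost."""
    return (benefit - cost) / cost

# Rank tasks from highest to lowest ROI, then adopt while ROI clears the hurdle.
ranked = sorted(tasks, key=lambda t: roi(t[1], t[2]), reverse=True)
adopt = [name for name, benefit, cost in ranked if roi(benefit, cost) >= HURDLE]
print(adopt)
```

The point is the ordering discipline, not the numbers: a negative-ROI task like the mail-routing row drops out no matter where it sits in the original work flow.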
It would need to eliminate the task of manually ranking applications and expand the task of marketing the program, as the AI would increase the returns to a greater applicant pool (better predictions about who will succeed and lower cost of evaluating applications). The school would modify the task of offering incentives like scholarships and financial aid due to increased certainty about who will succeed. Finally, the school would adjust other elements of the work flow to take advantage of being able to provide instantaneous school admission decisions.

Highlight (yellow) - 13. Decomposing Decisions > Location 1930
How costly is it to accept a weak student who we wrongly predicted would be among the elite alumni? How costly is it to reject a strong student who we wrongly predicted would be weak? The assessment of that trade-off is “judgment,” an explicit element in the AI canvas.

Highlight (yellow) - 13. Decomposing Decisions > Location 1936
Tasks need to be decomposed in order to see where prediction machines can be inserted. This allows you to estimate the benefit of the enhanced prediction and the cost of generating that prediction. Once you have generated reasonable estimates, rank-order the AIs from highest to lowest ROI and work your way down the list, implementing AI tools as long as the expected ROI makes sense.

Highlight (yellow) - 13. Decomposing Decisions > Location 1940
The AI canvas is an aid to help with the decomposition process. Fill out the AI canvas for every decision or task. This introduces discipline and structure into the process. It forces you to be clear about all three data types required: training, input, and feedback. It also forces you to articulate precisely what you need to predict, the judgment required to assess the relative value of different actions and outcomes, the action possibilities, and the outcome possibilities.

Highlight (yellow) - 13. Decomposing Decisions > Location 1943
At the center of the AI canvas is prediction. You need to identify the core prediction at the heart of the task, and this can require AI insight. The effort to answer this question often initiates an existential discussion among the leadership team: “What is our real objective, anyhow?” Prediction requires a specificity not often found in mission statements.

Highlight (yellow) - 14. Job Redesign > Location 1962
Why didn’t bookkeepers see the spreadsheet as a threat? Because VisiCalc actually made them more valuable.

Highlight (yellow) - 14. Job Redesign > Location 1968
They were not replaced but rather augmented with superpowers.

Highlight (yellow) - 14. Job Redesign > Location 1980
To automate a task completely, one failed piece can derail the entire exercise. You need to consider every step. Those small tasks may be very difficult missing links in automation and fundamentally constrain how to reformulate jobs. Thus, AI tools that address these missing links can have substantive effects.

Highlight (yellow) - 14. Job Redesign > Location 1994
while robots can take an object and move it to a human, someone still needs to do the “picking”—that is, figure out what goes where and then lift the object and move it. The last bit is most challenging because of just how difficult grasping actually is.

Highlight (yellow) - 14. Job Redesign > Location 2070
five clear roles for humans in the use of medical images will remain, at least in the short and medium term: choosing the image, using real-time images in medical procedures, interpreting machine output, training machines on new technologies, and employing judgment that may lead to overriding the prediction machine’s recommendation, perhaps based on information unavailable to the machine.

Highlight (yellow) - 14. Job Redesign > Location 2087
automation that eliminates a human from a task does not necessarily eliminate them from a job.

Highlight (yellow) - 14. Job Redesign > Location 2101
A job is a collection of tasks. When breaking down a work flow and employing AI tools, some tasks previously performed by humans may be automated, the ordering and emphasis of remaining tasks may change, and new tasks may be created. Thus, the collection of tasks that make up a job can change.

Highlight (yellow) - 14. Job Redesign > Location 2104
The implementation of AI tools generates four implications for jobs: AI tools may augment jobs, as in the example of spreadsheets and bookkeepers. AI tools may contract jobs, as in fulfillment centers. AI tools may lead to the reconstitution of jobs, with some tasks added and others taken away, as with radiologists. AI tools may shift the emphasis on the specific skills required for a particular job, as with school bus drivers.

Highlight (yellow) - 14. Job Redesign > Location 2109
AI tools may shift the relative returns to certain skills and, thus, change the types of people who are best suited to particular jobs. In the case of bookkeepers, the arrival of the spreadsheet diminished the returns to being able to perform many calculations quickly on a calculator. At the same time, it increased the returns to being good at asking the right questions in order to fully take advantage of the technology’s ability to efficiently run scenario analyses.

Part Four: Strategy

Highlight (yellow) - 15. AI in the C-Suite > Location 2226
To make the most of prediction machines, you need to rethink the reward functions throughout your organization to better align with your true goals.

Highlight (yellow) - 15. AI in the C-Suite > Location 2227
Beyond recruiting, the marketing of the team needs to change, perhaps to deemphasize individual performance.

Highlight (yellow) - 15. AI in the C-Suite > Location 2234
like oil, data has different grades.

Highlight (yellow) - 15.
AI in the C-Suite > Location 2236 Training data is used at the beginning to train an algorithm, but once the prediction machine is running, it is not useful anymore. It is as if you have burned it. Your past data on yogurt sales has little value once you have a prediction machine built on it. Highlight (yellow) - 15. AI in the C-Suite > Location 2247 An AI innovator who offers prediction machines for yogurt demand could do well, but would have to deal with a supermarket chain in order to create any value. Only the supermarket chain can take the action that stocks yogurt or not. And without that action, the prediction machine for yogurt demand has no value. Highlight (yellow) - 15. AI in the C-Suite > Location 2285 Prediction machines will increase the value of complements, including judgment, actions, and data. Highlight (yellow) - 15. AI in the C-Suite > Location 2286 The increasing value of judgment may lead to changes in organizational hierarchy—there may be higher returns to putting different roles or different people in positions of power. Highlight (yellow) - 15. AI in the C-Suite > Location 2287 prediction machines enable managers to move beyond optimizing individual components to optimizing higher-level goals and thus make decisions closer to the objectives of the organization. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2361 According to the Bureau of Labor Statistics, tellers were not being automated out of a job (see figure 16-1). However, they were automated out of the bank-telling task. Tellers ended up becoming the marketing and customer service agents for bank products beyond the collection and dispensing of cash. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2386 humans are critical to decision making where the goals are subjective. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2388 AI will have an impact on labor that is different from its impact on capital. 
The importance of judgment means that employee contracts need to be more subjective. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2392 Prediction and judgment are complements, so better prediction increases the demand for judgment, meaning that your employees’ main role will be to exercise judgment in decision making. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2434 Unique data is important for creating strategic advantage. If data is not unique, it is hard to build a business around prediction machines. Without data, there is no real pathway to learning, so AI is not core to your strategy. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2457 A key strategic choice is determining where your business ends and another business begins—deciding on the boundary of the firm Highlight (yellow) - 16. When AI Transforms Your Business > Location 2460 By reducing uncertainty, prediction machines increase the ability to write contracts, and thus increase the incentive for companies to contract out both capital equipment and labor that focuses on data, prediction, and action. However, prediction machines decrease the incentive for companies to contract out labor that focuses on judgment. Judgment quality is hard to specify in a contract and difficult to monitor. Highlight (yellow) - 16. When AI Transforms Your Business > Location 2465 AI will increase incentives to own data. Still, contracting out for data may be necessary when the predictions that the data provides are not strategically essential to your organization. In such cases, it may be best to purchase predictions directly rather than purchase data and then generate your own predictions. Highlight (yellow) - 17. Your Learning Strategy > Location 2492 The economist’s filter knows that any statement of “we will put our attention into X” means a trade-off. Highlight (yellow) - 17. 
Your Learning Strategy > Location 2494 AI-first means devoting resources to data collection and learning (a longer-term objective) at the expense of important short-term considerations such as immediate customer experience, revenue, and user numbers. Highlight (yellow) - 17. Your Learning Strategy > Location 2527 Learning-by-using is a term that economic historian Nathan Rosenberg coined to describe the phenomenon whereby firms improve their product design through interactions with users. Highlight (yellow) - 17. Your Learning Strategy > Location 2630 Companies must trade off how quickly they should use a prediction machine’s experience in the real world to generate new predictions. Use that experience immediately and the AI adapts more quickly to changes in local conditions, but at the cost of quality assurance. Highlight (yellow) - 17. Your Learning Strategy > Location 2673 If the machines get the experience, then the humans might not. Highlight (yellow) - 17. Your Learning Strategy > Location 2702 An AI-first strategy places maximizing prediction accuracy as the central goal of the organization, even if that means compromising on other goals such as maximizing revenue, user numbers, or user experience. Note - 17. Your Learning Strategy > Location 2704 Really? Not as a means to an end? I guess that's why it's "first" Highlight (yellow) - 18. Managing AI Risk > Location 2759 “AI neuroscience.”6 A key tool is to hypothesize what might drive the differences, provide the AI with different input data that tests the hypothesis, and then compare the resulting predictions. Lambrecht and Tucker did this when they discovered that women saw fewer STEM ads because it was less expensive to show the ad to men. The point is that the black box of AI is not an excuse to ignore potential discrimination or a way to avoid using AI in situations where discrimination might matter. Highlight (yellow) - 18. 
Managing AI Risk > Location 2799 University of Washington researchers showed that Google’s new algorithm for detecting video content could be fooled into misclassifying videos by inserting random images for fractions of a second.8 For example, you can trick an AI into misclassifying a video of a zoo by inserting images of cars for such a short time that a human would never see the cars, but the computer could. Highlight (yellow) - 18. Managing AI Risk > Location 2813 AI technologies will develop hand-in-hand with identity verification. Highlight (yellow) - 18. Managing AI Risk > Location 2817 Ecologists have taught us that homogenous populations are at greater risk of disease and destruction.9 A classic example is in farming. If all farmers in a region or country plant the same strain of a particular crop, they might do better in the short term. They likely chose that crop because it grows particularly well in the region. By adopting the best strain, they reduce their individual risk. However, this very homogeneity presents an opportunity for disease or even adverse climate conditions. If all farmers plant the same strain, then they are all vulnerable to the same disease. The chances of a disastrous widespread crop failure increase. Such monoculture can be individually beneficial but increase system-wide risk. Highlight (yellow) - 18. Managing AI Risk > Location 2823 If one prediction machine system proves itself particularly useful, then you might apply that system everywhere in your organization or even the world. All cars might adopt whatever prediction machine appears safest. That reduces individual-level risk and increases safety; however, it also expands the chance of a massive failure, whether purposeful or not. If all cars have the same prediction algorithm, an attacker might be able to exploit that algorithm, manipulate the data or model in some way, and have all cars fail at the same time. Highlight (yellow) - 18. 
Managing AI Risk > Location 2835 benefits of implementing prediction on the ground rather than in the cloud for the purpose of faster context-dependent learning (at the cost of more accurate predictions overall) and to protect consumer privacy. Prediction on the ground has another benefit. If the device is not connected to the cloud, a simultaneous attack becomes difficult.11 While training the prediction machine likely happens in the cloud or elsewhere, once the machine is trained, it may be possible to do predictions directly on the device without sending information back to the cloud. Highlight (yellow) - 18. Managing AI Risk > Location 2856 In 2016, computer science researchers showed that certain deep-learning algorithms are particularly vulnerable to such imitation.15 They tested this possibility on some important machine-learning platforms (including Amazon Machine Learning) and demonstrated that with a relatively small number of queries (650–4,000), they could reverse-engineer those models to a very close approximation, sometimes perfectly. Highlight (yellow) - 18. Managing AI Risk > Location 2863 attacks leave a trail. It is necessary to query the prediction machine many times to understand it. Unusual quantities of queries or an unusual diversity of queries should raise red flags. Highlight (yellow) - 18. Managing AI Risk > Location 2886 AI carries many types of risk. We summarize six of the most salient types here. Highlight (yellow) - 18. Managing AI Risk > Location 2887 Predictions from AIs can lead to discrimination. Even if such discrimination is inadvertent, it creates liability. Highlight (yellow) - 18. Managing AI Risk > Location 2888 AIs are ineffective when data is sparse. This creates quality risk, particularly of the “unknown known” type, in which a prediction is provided with confidence, but is false. Highlight (yellow) - 18. 
Managing AI Risk > Location 2890 Incorrect input data can fool prediction machines, leaving their users vulnerable to attack by hackers. Highlight (yellow) - 18. Managing AI Risk > Location 2891 Just as in biodiversity, the diversity of prediction machines involves a trade-off between individual- and system-level outcomes. Less diversity may benefit individual-level performance, but increase the risk of massive failure. Highlight (yellow) - 18. Managing AI Risk > Location 2893 Prediction machines can be interrogated, exposing you to intellectual property theft and to attackers who can identify weaknesses. Highlight (yellow) - 18. Managing AI Risk > Location 2894 Feedback can be manipulated so that prediction machines learn destructive behavior. Part Five: Society Highlight (yellow) - 19. Beyond Business > Location 2910 Wisdom is breadth. Wisdom is not having too narrow a view. That is the essence of wisdom; it’s broad framing. Highlight (yellow) - 19. Beyond Business > Location 2940 Decades of research into the effects of trade show that other jobs will appear, and overall employment will not plummet. Highlight (yellow) - 19. Beyond Business > Location 3001 Technology-based monopolies are temporary due to a process that economist Joseph Schumpeter called “the gale of creative destruction.” Highlight (yellow) - 19. Beyond Business > Location 3141 The first trade-off is productivity versus distribution. Many have suggested that AI will make us poorer or worse off. That’s not true. Economists agree that technological advance makes us better off and enhances productivity. AI will unambiguously enhance productivity. The problem isn’t wealth creation; it’s distribution. AI might exacerbate the income inequality problem for two reasons. First, by taking over certain tasks, AIs might increase competition among humans for the remaining tasks, lowering wages and further reducing the fraction of income earned by labor versus the fraction earned by the owners of capital. 
Second, prediction machines, like other computer-related technologies, may be skill-biased such that AI tools disproportionately enhance the productivity of highly skilled workers. Highlight (yellow) - 19. Beyond Business > Location 3147 The second trade-off is innovation versus competition. Like most software-related technologies, AI has scale economies. Furthermore, AI tools are often characterized by some degree of increasing returns: better prediction accuracy leads to more users, more users generate more data, and more data leads to better prediction accuracy. Businesses have greater incentives to build prediction machines if they have more control, but, along with scale economies, this may lead to monopolization. Faster innovation may benefit society from a short-term perspective but may not be optimal from a social or longer-term perspective. Highlight (yellow) - 19. Beyond Business > Location 3151 The third trade-off is performance versus privacy. AIs perform better with more data. In particular, they are better able to personalize their predictions if they have access to more personal data. The provision of personal data will often come at the expense of reduced privacy. Some jurisdictions, like Europe, have chosen to create an environment that provides their citizens with more privacy. That may benefit their citizens and may even create conditions for a more dynamic market for private information where individuals can more easily decide whether they wish to trade, sell, or donate their private data. On the other hand, that may create frictions in settings where opting in is costly and disadvantages European firms and citizens in markets where AIs with better access to data are more competitive.
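
The procedure the book keeps returning to (Locations 1835 and 1936: decompose a work flow into tasks, estimate the ROI of an AI tool for each task, rank-order, and implement from the top down while the expected ROI still makes sense) can be sketched in a few lines. The task names and dollar figures below are hypothetical illustrations, not from the book; only the ranking logic follows the text.

```python
# Sketch of the rank-ordering procedure from Locations 1835 and 1936.
# Hypothetical tasks from the MBA-admissions example, with made-up
# annual benefit and cost estimates for an AI tool per task.
tasks = [
    {"task": "rank applications", "benefit": 500_000, "cost": 100_000},
    {"task": "answer applicant questions", "benefit": 80_000, "cost": 60_000},
    {"task": "schedule interviews", "benefit": 30_000, "cost": 50_000},
]

MIN_ROI = 0.5  # hurdle rate: implement only while expected ROI clears this


def roi(t):
    """Simple return on investment: net benefit divided by cost."""
    return (t["benefit"] - t["cost"]) / t["cost"]


# Rank-order the candidate AI tools from highest to lowest ROI...
ranked = sorted(tasks, key=roi, reverse=True)

# ...then start at the top and work downward, stopping once the
# expected ROI no longer makes sense.
to_implement = []
for t in ranked:
    if roi(t) < MIN_ROI:
        break
    to_implement.append(t["task"])

print(to_implement)  # with these numbers, only "rank applications" clears the hurdle
```

With these illustrative numbers, ranking applications has an ROI of 4.0 and is implemented, while the other two tasks (ROI of about 0.33 and -0.4) fall below the hurdle, which matches the book's point that not every task in a work flow is worth an AI tool even when one could be built.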
