Substitution vs complement
Fifteen years ago, American workers were worried about competition from cheaper Mexican substitutes. And that made sense, because humans really can substitute for each other. Today people think they can hear Ross Perot's "giant sucking sound" once more, but they trace it back to server farms somewhere in Texas instead of cut-rate factories in Tijuana. Americans fear technology in the near future because they see it as a replay of the globalization of the near past. But the situations are very different: people compete for jobs and for resources; computers compete for neither.

Globalization Means Substitution

When Perot warned about foreign competition, both George H. W. Bush and Bill Clinton preached the gospel of free trade: since every person has a relative strength at some particular job, in theory the economy maximizes wealth when people specialize according to their advantages and then trade with each other. In practice, it's not unambiguously clear how well free trade has worked, for many workers at least. Gains from trade are greatest when there's a big discrepancy in comparative advantage, but the global supply of workers willing to do repetitive tasks for an extremely small wage is extremely large.

People don't just compete to supply labor; they also demand the same resources. While American consumers have benefited from access to cheap toys and textiles from China, they've had to pay higher prices for the gasoline newly desired by millions of Chinese motorists. Whether people eat shark fins in Shanghai or fish tacos in San Diego, they all need food and they all need shelter. And desire doesn't stop at subsistence—people will demand ever more as globalization continues. Now that millions of Chinese peasants can finally enjoy a secure supply of basic calories, they want more of them to come from pork instead of just grain. The convergence of desire is even more obvious at the top: all oligarchs have the same taste in Cristal, from Petersburg to Pyongyang.

Complementary
Technology Means Complementarity

Now think about the prospect of competition from computers instead of competition from human workers. On the supply side, computers are far more different from people than any two people are different from each other: men and machines are good at fundamentally different things. People have intentionality—we form plans and make decisions in complicated situations. We're less good at making sense of enormous amounts of data. Computers are exactly the opposite: they excel at efficient data processing, but they struggle to make basic judgments that would be simple for any human.

To understand the scale of this variance, consider another of Google's computer-for-human substitution projects. In 2012, one of their supercomputers made headlines when, after scanning 10 million thumbnails of YouTube videos, it learned to identify a cat with 75% accuracy. That seems impressive—until you remember that an average four-year-old can do it flawlessly. When a cheap laptop beats the smartest mathematicians at some tasks but even a supercomputer with 16,000 CPUs can't beat a child at others, you can tell that humans and computers are not just more or less powerful than each other—they're categorically different.

The stark differences between man and machine mean that gains from working with computers are much higher than gains from trade with other people. We don’t trade with computers any more than we trade with livestock or lamps. And that’s the point: computers are tools, not rivals. The differences are even deeper on the demand side. Unlike people in industrializing countries, computers don’t yearn for more luxurious foods or beachfront villas in Cap Ferrat; all they require is a nominal amount of electricity, which they’re not even smart enough to want. When we design new computer technology to help solve problems, we get all the efficiency gains of a hyperspecialized trading partner without having to compete with it for resources. Properly understood, technology is the one way for us to escape competition in a globalizing world. As computers become more and more powerful, they won’t be substitutes for humans: they’ll be complements.

COMPLEMENTARY BUSINESSES
Complementarity between computers and humans isn't just a macro-scale fact. It's also the path to building a great business. I came to understand this from my experience at PayPal. In mid-2000, we had survived the dot-com crash and we were growing fast, but we faced one huge problem: we were losing upwards of $10 million to credit card fraud every month. Since we were processing hundreds or even thousands of transactions per minute, we couldn't possibly review each one—no human quality control team could work that fast.

So we did what any group of engineers would do: we tried to automate a solution. First, Max Levchin assembled an elite team of mathematicians to study the fraudulent transfers in detail. Then we took what we learned and wrote software to automatically identify and cancel bogus transactions in real time. But it quickly became clear that this approach wouldn't work either: after an hour or two, the thieves would catch on and change their tactics. We were dealing with an adaptive enemy, and our software couldn't adapt in response.

The fraudsters' adaptive evasions fooled our automatic detection algorithms, but we found that they didn't fool our human analysts as easily. So Max and his engineers rewrote the software to take a hybrid approach: the computer would flag the most suspicious transactions on a well-designed user interface, and human operators would make the final judgment as to their legitimacy. Thanks to this hybrid system—we named it "Igor," after the Russian fraudster who bragged that we'd never be able to stop him—we turned our first quarterly profit in the first quarter of 2002 (as opposed to a quarterly loss of $29.3 million one year before). The FBI asked us if we'd let them use Igor to help detect financial crime. And Max was able to boast, grandiosely but truthfully, that he was "the Sherlock Holmes of the Internet Underground."

This kind of man-machine symbiosis enabled PayPal to stay in business, which in turn enabled hundreds of thousands of small businesses to accept the payments they needed to thrive on the internet. None of it would have been possible without the man-machine solution—even though most people would never see it or even hear about it.

I continued to think about this after we sold PayPal in 2002: if humans and computers together could achieve dramatically better results than either could attain alone, what other valuable businesses could be built on this core principle? The next year, I pitched Alex Karp, an old Stanford classmate, and Stephen Cohen, a software engineer, on a new startup idea: we would use the human-computer hybrid approach from PayPal's security system to identify terrorist networks and financial fraud. We already knew the FBI was interested, and in 2004 we founded Palantir, a software company that helps people extract insight from divergent sources of information. The company is on track to book sales of $1 billion in 2014, and Forbes has called Palantir's software the "killer app" for its rumored role in helping the government locate Osama bin Laden.

We have no details to share from that operation, but we can say that neither human intelligence by itself nor computers alone will be able to make us safe. America's two biggest spy agencies take opposite approaches: the Central Intelligence Agency is run by spies who privilege humans; the National Security Agency is run by generals who prioritize computers. CIA analysts have to wade through so much noise that it's very difficult to identify the most serious threats. NSA computers can process huge quantities of data, but machines alone cannot authoritatively determine whether someone is plotting a terrorist act. Palantir aims to transcend these opposing biases: its software analyzes the data the government feeds it—phone records of radical clerics in Yemen or bank accounts linked to terror cell activity, for instance—and flags suspicious activities for a trained analyst to review. In addition to helping find terrorists, analysts using Palantir's software have been able to predict where insurgents plant IEDs ...
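The flag-and-review pattern described above is easy to picture in code. The sketch below is a minimal illustration of the idea, not PayPal's actual Igor system or Palantir's software: the thresholds, field names, and fraud scores are all invented for the example, and the score is assumed to come from some upstream detection model.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        txn_id: str
        amount: float
        fraud_score: float  # 0..1, assumed to come from an upstream model

    # Illustrative thresholds, not real tuning values.
    AUTO_CANCEL = 0.95   # confident enough to act without a human
    NEEDS_REVIEW = 0.60  # suspicious enough to surface to an analyst

    def route(txn: Transaction) -> str:
        """Cancel outright, queue for human review, or approve."""
        if txn.fraud_score >= AUTO_CANCEL:
            return "cancel"
        if txn.fraud_score >= NEEDS_REVIEW:
            return "review"  # a human makes the final judgment
        return "approve"

    txns = [
        Transaction("t1", 9800.00, 0.97),
        Transaction("t2", 120.00, 0.72),
        Transaction("t3", 35.50, 0.05),
    ]
    for t in txns:
        print(t.txn_id, route(t))  # t1 cancel, t2 review, t3 approve

The point of the hybrid is the middle band: the computer handles the clear-cut volume at machine speed, while the ambiguous cases, the ones an adaptive adversary aims for, go to people.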

The Ideology of Computer Science
Why do so many people miss the power of complementarity? It starts in school. Software engineers tend to work on projects that replace human efforts because that's what they're trained to do. Academics make their reputations through specialized research; their primary goal is to publish papers, and publication means respecting the limits of a particular discipline. For computer scientists, that means reducing human capabilities into specialized tasks that computers can be trained to conquer one by one.

Just look at the trendiest fields in computer science today. The very term "machine learning" evokes imagery of replacement, and its boosters seem to believe that computers can be taught to perform almost any task, so long as we feed them enough training data. Any user of Netflix or Amazon has experienced the results of machine learning firsthand: both companies use algorithms to recommend products based on your viewing and purchase history. Feed them more data and the recommendations get ever better. Google Translate works the same way, providing rough but serviceable translations into any of the 80 languages it supports—not because the software understands human language, but because it has extracted patterns through statistical analysis of a huge corpus of text.

The other buzzword that epitomizes a bias toward substitution is "big data." Today's companies have an insatiable appetite for data, mistakenly believing that more data always creates more value. But big data is usually dumb data. Computers can find patterns that elude humans, but they don't know how to compare patterns from different sources or how to interpret complex behaviors. Actionable insights can only come from a human analyst (or the kind of generalized artificial intelligence that exists only in science fiction).

We have let ourselves become enchanted by big data only because we exoticize technology. We're impressed with small feats accomplished by computers alone, but we ignore big achievements from complementarity because the human contribution makes them less uncanny. Watson, Deep Blue, and ever-better machine learning algorithms are cool. But the most valuable companies in the future won't ask what problems can be solved with computers alone. Instead, they'll ask: how can computers help humans solve hard problems?
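To make the recommendation example concrete, here is a toy item-to-item sketch of the idea. It is nothing like Netflix's or Amazon's production systems; the purchase histories and the co-occurrence scoring are invented purely to show the pattern-counting at work, and why more histories sharpen the suggestions.

    from collections import Counter

    # Toy purchase histories; more histories mean better co-occurrence counts.
    histories = [
        {"camera", "tripod", "sd_card"},
        {"camera", "sd_card"},
        {"camera", "tripod"},
        {"novel", "bookmark"},
    ]

    def recommend(basket, k=2):
        """Suggest items that most often co-occur with the current basket."""
        scores = Counter()
        for h in histories:
            if basket & h:               # this history overlaps the basket
                for item in h - basket:  # candidate items the user lacks
                    scores[item] += 1
        return [item for item, _ in scores.most_common(k)]

    print(recommend({"camera"}))  # ['tripod', 'sd_card'] (tie, order may vary)

Note what the sketch cannot do: it counts patterns, but it has no idea what a camera is, which is exactly the interpretive gap the passage says only a human analyst fills.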

EVER-SMARTER COMPUTERS: FRIEND OR FOE?
The future of computing is necessarily full of unknowns. It's become conventional to see ever-smarter anthropomorphized robot intelligences like Siri and Watson as harbingers of things to come; once computers can answer all our questions, perhaps they'll ask why they should remain subservient to us at all.

The logical endpoint to this substitutionist thinking is called "strong AI": computers that eclipse humans on every important dimension. Of course, the Luddites are terrified by the possibility. It even makes the futurists a little uneasy; it's not clear whether strong AI would save humanity or doom it. Technology is supposed to increase our mastery over nature and reduce the role of chance in our lives; building smarter-than-human computers could actually bring chance back with a vengeance. Strong AI is like a cosmic lottery ticket: if we win, we get utopia; if we lose, Skynet substitutes us out of existence.

But even if strong AI is a real possibility rather than an imponderable mystery, it won't happen anytime soon: replacement by computers is a worry for the 22nd century. Indefinite fears about the far future shouldn't stop us from making definite plans today. Luddites claim that we shouldn't build the computers that might replace people someday; crazed futurists argue that we should. These two positions are mutually exclusive but they are not exhaustive: there is room in between for sane people to build a vastly better world in the decades ahead. As we find new ways to use computers, they won't just get better at the kinds of things people already do; they'll help us to do what was previously unimaginable.

Clean energy
AT THE START of the 21st century, everyone agreed that the next big thing was clean technology. It had to be: in Beijing, the smog had gotten so bad that people couldn't see from building to building—even breathing was a health risk. Bangladesh, with its arsenic-laden water wells, was suffering what the New York Times called "the biggest mass poisoning in history." In the U.S., Hurricanes Ivan and Katrina were said to be harbingers of the coming devastation from global warming. Al Gore implored us to attack these problems "with the urgency and resolve that has previously been seen only when nations mobilized for war." People got busy: entrepreneurs started thousands of cleantech companies, and investors poured more than $50 billion into them. So began the quest to cleanse the world.

It didn't work. Instead of a healthier planet, we got a massive cleantech bubble. Solyndra is the most famous green ghost, but most cleantech companies met similarly disastrous ends—more than 40 solar manufacturers went out of business or filed for bankruptcy in 2012 alone. The leading index of alternative energy companies shows the bubble's dramatic deflation.

Why did cleantech fail? Conservatives think they already know the answer: as soon as green energy became a priority for the government, it was poisoned.

But there really were (and there still are) good reasons for making energy a priority. And the truth about cleantech is more complex and more important than government failure. Most cleantech companies crashed because they neglected one or more of the seven questions that every business must answer:
1. The Engineering Question: Can you create breakthrough technology instead of incremental improvements?
2. The Timing Question: Is now the right time to start your particular business?
3. The Monopoly Question: Are you starting with a big share of a small market?
4. The People Question: Do you have the right team?
5. The Distribution Question: Do you have a way to not just create but deliver your product?
6. The Durability Question: Will your market position be defensible 10 and 20 years into the future?
7. The Secret Question: Have you identified a unique opportunity that others don't see?

We’ve discussed these elements before. Whatever your industry, any great business plan must address every one of them. If you don’t have good answers to these questions, you’ll run into lots of “bad luck” and your business will fail. If you nail all seven, you’ll master fortune and succeed. Even getting five or six correct might work. But the striking thing about the cleantech bubble was that people were starting companies with zero good answers—and that meant hoping for a miracle. It’s hard to know exactly why any particular cleantech company failed, since almost all of them made several serious mistakes. But since any one of those mistakes is enough to doom your company, it’s worth reviewing cleantech’s losing scorecard in more detail.

THE ENGINEERING QUESTION
A great technology company should have proprietary technology an order of magnitude better than its nearest substitute. But cleantech companies rarely produced 2x, let alone 10x, improvements. Sometimes their offerings were actually worse than the products they sought to replace. Solyndra developed novel, cylindrical solar cells, but to a first approximation, cylindrical cells are only 1/π as efficient as flat ones—they simply don't receive as much direct sunlight. The company tried to correct for this deficiency by using mirrors to reflect more sunlight to hit the bottoms of the panels, but it's hard to recover from a radically inferior starting point.

Companies must strive for 10x better because merely incremental improvements often end up meaning no improvement at all for the end user. Suppose you develop a new wind turbine that's 20% more efficient than any existing technology—when you test it in the laboratory. That sounds good at first, but the lab result won't begin to compensate for the expenses and risks faced by any new product in the real world. And even if your system really is 20% better on net for the customer who buys it, people are so used to exaggerated claims that you'll be met with skepticism when you try to sell it. Only when your product is 10x better can you offer the customer transparent superiority.
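The 1/π figure follows from simple geometry. This is a first-approximation sketch that ignores diffuse light and the mirror reflections Solyndra's design leaned on: under direct sunlight, a cylindrical cell of radius r and length L intercepts light only over its projected area, while its cell material covers the full curved surface.

    \[
    \frac{\text{sunlight-intercepting area}}{\text{cell area}}
      = \frac{2rL}{2\pi r L}
      = \frac{1}{\pi} \approx 0.32
    \]

Hence the mirrors under the panels: they were an attempt to win back some of the roughly two-thirds of the cell surface that direct sunlight never reaches.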

THE TIMING QUESTION
Cleantech entrepreneurs worked hard to convince themselves that their appointed hour had arrived. When he announced his new company in 2008, SpectraWatt CEO Andrew Wilson stated that "[t]he solar industry is akin to where the microprocessor industry was in the late 1970s. There is a lot to be figured out and improved." The second part was right, but the microprocessor analogy was way off. Ever since the first microprocessor was built in 1970, computing has advanced not just rapidly but exponentially, as Intel's early product release history makes plain.

The first silicon solar cell, by contrast, was created by Bell Labs in 1954—more than a half century before Wilson's press release. Photovoltaic efficiency improved in the intervening decades, but slowly and linearly: Bell's first solar cell had about 6% efficiency; neither today's crystalline silicon cells nor modern thin-film cells have exceeded 25% efficiency in the field. There were few engineering developments in the mid-2000s to suggest impending liftoff.

Entering a slow-moving market can be a good strategy, but only if you have a definite and realistic plan to take it over. The failed cleantech companies had none.
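The gap between the two trajectories is worth making explicit. Using the efficiency figures above, and assuming the commonly cited Moore's-law doubling period of roughly two years (an assumption not stated in the text), a back-of-the-envelope comparison looks like this:

    % Solar: linear improvement, in absolute percentage points per year.
    \[
    \text{solar: } \frac{25\% - 6\%}{2014 - 1954} \approx 0.3 \text{ percentage points per year}
    \]
    % Chips: exponential growth, assuming a ~2-year doubling period.
    \[
    \text{chips: } N(t) \approx N_0 \cdot 2^{(t - t_0)/2}
      \quad\Rightarrow\quad
      \frac{N(2014)}{N(1970)} \approx 2^{22} \approx 4 \times 10^{6}
    \]

Six decades bought solar cells roughly a fourfold efficiency gain, with physics capping how much further that can go; a shorter interval multiplied transistor counts about a millionfold. Wilson's analogy compared a linear curve to an exponential one.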

THE MONOPOLY QUESTION
In 2006, billionaire technology investor John Doerr announced that "green is the new red, white and blue." He could have stopped at "red." As Doerr himself said, "Internet-sized markets are in the billions of dollars; the energy markets are in the trillions." What he didn't say is that huge, trillion-dollar markets mean ruthless, bloody competition. Others echoed Doerr over and over: in the 2000s, I listened to dozens of cleantech entrepreneurs begin fantastically rosy PowerPoint presentations with all-too-true tales of trillion-dollar markets—as if that were a good thing.

Cleantech executives emphasized the bounty of an energy market big enough for all comers, but each one typically believed that his own company had an edge. In 2006, Dave Pearce, CEO of solar manufacturer MiaSolé, admitted to a congressional panel that his company was just one of several "very strong" startups working on one particular kind of thin-film solar cell development. Minutes later, Pearce predicted that MiaSolé would become "the largest producer of thin-film solar cells in the world" within a year's time. That didn't happen, but it might not have helped them anyway: thin-film is just one of more than a dozen kinds of solar cells. Customers won't care about any particular technology unless it solves a particular problem in a superior way. And if you can't monopolize a unique solution for a small market, you'll be stuck with vicious competition. That's what happened to MiaSolé, which was acquired in 2013 for hundreds of millions of dollars less than its investors had put into the company.

Exaggerating your own uniqueness is an easy way to botch the monopoly question. Suppose you're running a solar company that's successfully installed hundreds of solar panel systems with a combined power generation capacity of 100 megawatts. Since total U.S. solar energy production capacity is 950 megawatts, you own 10.53% of the market. Congratulations, you tell yourself: you're a player. But what if the U.S. solar energy market isn't the relevant market? What if the relevant market is the global solar market, with a production capacity of 18 gigawatts? Your 100 megawatts now makes you a very small fish indeed: suddenly you own less than 1% of the market. And what if the appropriate measure isn't global solar, but rather renewable energy in general? Annual production capacity from renewables is 420 gigawatts globally; you just shrank to 0.02% of the market. And compared to the total global power generation capacity of 15,000 gigawatts, your 100 megawatts is just a drop in the ocean.

Cleantech entrepreneurs' thinking about markets was hopelessly confused. They would rhetorically shrink their market in order to seem differentiated, only to turn around and ask to be valued based on huge, supposedly lucrative markets. But you can't dominate a submarket if it's fictional, and huge markets are highly competitive, not highly attainable. Most cleantech founders would have been better off opening a new British restaurant in downtown Palo Alto.
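The shrinking-share arithmetic above is easy to verify. This snippet simply redoes the divisions with the capacity figures given in the text, all converted to megawatts:

    # Market-share arithmetic from the text; all capacities in megawatts.
    installed = 100  # your installed capacity, MW

    markets = {
        "U.S. solar": 950,             # MW
        "global solar": 18_000,        # 18 GW
        "global renewables": 420_000,  # 420 GW
        "global power": 15_000_000,    # 15,000 GW
    }

    for name, capacity_mw in markets.items():
        share = installed / capacity_mw * 100
        print(f"{name}: {share:.4f}%")
    # U.S. solar: 10.5263%       -> "you're a player"
    # global solar: 0.5556%      -> under 1%
    # global renewables: 0.0238% -> the ~0.02% in the text
    # global power: 0.0007%      -> a drop in the ocean

Same 100 megawatts, four different denominators: deciding which denominator is "the market" is exactly the question the monopoly framing forces you to answer honestly.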