Showing posts with label complexity.

Thursday, July 14, 2022

Tim Palmer (Oxford): Status and Future of Climate Modeling — Manifold Podcast #16

 

Tim Palmer is a Royal Society Research Professor in Climate Physics and a Senior Fellow at the Oxford Martin Institute. He is interested in the predictability and dynamics of weather and climate, including extreme events. 

He was involved in the first five IPCC assessment reports and was co-chair of the international scientific steering group of the World Climate Research Programme's project on climate variability and predictability (CLIVAR). 

After completing his DPhil at Oxford in theoretical physics, Tim worked at the UK Meteorological Office and later the European Centre for Medium-Range Weather Forecasts. For a large part of his career, Tim has developed ensemble methods for predicting uncertainty in weather and climate forecasts. 

In 2020 Tim was elected to the US National Academy of Sciences. 

Steve, Corey Washington, and Tim first discuss his career path from physics to climate research and then explore the science of climate modeling and the main uncertainties in state-of-the-art models. 

In this episode, we discuss: 

00:00 Introduction 
1:48 Tim Palmer's background and transition from general relativity to climate modeling 
15:13 Climate modeling uncertainty 
46:41 Navier-Stokes equations in climate modeling 
53:37 Where climate change is an existential risk 
1:01:26 Investment in climate research 

Links: 
 
Tim Palmer (Oxford University) 

The scientific challenge of understanding and estimating climate change (2019) https://www.pnas.org/doi/pdf/10.1073/pnas.1906691116 

ExtremeEarth 

Physicist Steve Koonin on climate change


Note added: For some background on the importance of water vapor (cloud) distribution within the primitive cells used in these climate simulations, see:


Low clouds, which are thick and bright, mainly reflect solar energy back into space (a cooling effect), while high, thin clouds mainly trap outgoing IR radiation near the Earth (a warming effect). The net effect on heating from the distribution of water vapor and cloud is crucial in these models. However, due to the complexity of the Navier-Stokes equations, current simulations cannot actually solve for this distribution from first principles at the sub-grid scale. Rather, the modelers hand-code assumptions (parameterizations) about fine-grained behavior within each cell. The resulting uncertainty in (e.g., long-term) climate prediction from these approximations is unknown.
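To make the flavor of this concrete, here is a deliberately toy sketch of a sub-grid cloud parameterization -- my own illustration, not any actual model's scheme; every function and number in it is an assumption:

```python
# Toy illustration of a sub-grid "parameterization": the cloud fraction inside a
# grid cell is not solved from the Navier-Stokes equations; it is assumed to be a
# hand-tuned function of the cell's resolved (mean) state. All numbers are made up.

def relative_humidity(q, q_sat):
    """Cell-mean relative humidity from specific humidity q and saturation value q_sat."""
    return q / q_sat

def cloud_fraction(rh, rh_crit=0.8):
    """Assumed rule: no cloud below a critical RH, linear ramp to overcast at RH = 1."""
    if rh <= rh_crit:
        return 0.0
    return min(1.0, (rh - rh_crit) / (1.0 - rh_crit))

def radiative_effect(cf, is_low_cloud):
    """Crude sign convention: low clouds mostly reflect sunlight (cooling),
    high clouds mostly trap outgoing IR (warming). Magnitudes are arbitrary."""
    return -50.0 * cf if is_low_cloud else +30.0 * cf  # W/m^2, illustrative only

# Example: a single cell whose humidity puts it just above the assumed threshold.
rh = relative_humidity(q=0.0085, q_sat=0.0100)   # RH = 0.85
cf = cloud_fraction(rh)                          # 0.25 under the assumed ramp
print(cf, radiative_effect(cf, is_low_cloud=True))
```

The point is only that the critical threshold and ramp are chosen by the modeler, not derived, which is where the unquantified uncertainty enters.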

Wednesday, May 24, 2017

AI knows best: AlphaGo "like a God"


Humans are going to have to learn to "trust the AI" without understanding why it is right. I often make an analogous point to my kids -- "At your age, if you and Dad disagree, chances are that Dad is right" :-)  Of course, I always try to explain the logic behind my thinking, but in the case of some complex machine optimizations (e.g., Go strategy), humans may not be able to understand even the detailed explanations.

In some areas of complex systems -- neuroscience, genomics, molecular dynamics -- we also see machine prediction that is superior to other methods, but difficult even for scientists to understand. When hundreds or thousands of genes combine to control many dozens of molecular pathways, what kind of explanation can one offer for why a particular setting of the controls (DNA pattern) works better than another?

There was never any chance that the functioning of a human brain, the most complex known object in the universe, could be captured in verbal explication of the familiar kind (non-mathematical, non-algorithmic). The researchers that built AlphaGo would be at a loss to explain exactly what is going on inside its neural net...
NYTimes: ... “Last year, it was still quite humanlike when it played,” Mr. Ke said after the game. “But this year, it became like a god of Go.”

... After he finishes this week’s match, he said, he would focus more on playing against human opponents, noting that the gap between humans and computers was becoming too great. He would treat the software more as a teacher, he said, to get inspiration and new ideas about moves.

“AlphaGo is improving too fast,” he said in a news conference after the game. “AlphaGo is like a different player this year compared to last year.”
On earlier encounters with AlphaGo:
“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Sunday, June 26, 2011

Machines of Loving Grace

"The story of the rise of the machines, and why no one believes you can change the world for the better anymore."



This is episode 3 of the BBC documentary series All Watched Over by Machines of Loving Grace, featuring evolutionary theorists George Price (@19 & 34 & 42 minutes in) and William Hamilton. Also making appearances: Dian Fossey (@27 & 31 & 40 min), John von Neumann (@21 min), Richard Dawkins (@44 min), gorillas, chimps, Hutus, Tutsis, imperialists, mercenaries, and a mercenary (sociopathic?) mining company CEO (@7:30 min; good Hamilton gene stuff follows). Fossey is portrayed as a nut case; some of the details (e.g., about the death of her favorite gorilla Digit) do not match what is in her Wikipedia entry.

Bonus points to commenters who can classify the individuals by their V, M, and empathy scores ;-)

The episode describes the rise of the idea that humans are merely automata, sometimes altruistic and sometimes murderous, programmed by their genes. This perspective, it is claimed, is appealing because it absolves us of responsibility for terrible unintended consequences of our actions, such as the Rwandan genocide.

The logic of Machines of Loving Grace is sometimes suspect, but nevertheless it's a highly stimulating series.

Tuesday, March 29, 2011

Statins, cholesterol and medical science

Do statins work? Does high cholesterol cause heart disease? Are people who doubt the conventional wisdom on these two topics excessively skeptical conspiracy wonks? Or is big pharma pulling a fast one on the public by pushing statins?

See also this paper which is mentioned in the article below and which summarizes results of studies from 2008-2010 (response by authors to criticism). See here for earlier discussion on the overall quality of medical research.

MedConnect: ... Dr. Kausik K. Ray of the University of Cambridge (England) and his associates performed a meta-analysis of 11 randomized controlled trials that assessed the effects of statins versus placebo or control therapies on all-cause mortality. They restricted their analysis to data on high-risk patients with no known cardiovascular disease and included previously unpublished data, “to provide the most robust information to date” on statins as primary prevention in this patient group.

The meta-analysis involved 65,229 men and women in predominantly Western populations, with approximately 244,000 person-years of follow-up. There were 2,793 deaths during an average of 4 years of follow-up.

All-cause mortality was not significantly different between patients taking statins and those taking placebo or control therapies. This suggests that “the all-cause mortality reduction of 20% reported in JUPITER is likely to be an extreme and exaggerated finding, as often occurs when trials are stopped early,” Dr. Ray and his colleagues said (Arch. Intern. Med. 2010;170:1024-31).

This meta-analysis shows that statin therapy as primary prevention in high-risk patients is less beneficial than is generally perceived, and it can be inferred to be even less helpful in low-risk patients, they added.

Tuesday, December 21, 2010

Scaling laws for cities



Shorter Geoff West on cities (isn't math great?): resource consumption scales sub-linearly with population, whereas economic output scales super-linearly.
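For readers who want the arithmetic spelled out, here is a minimal sketch of what sub-linear and super-linear scaling mean here; the exponents are back-calculated from the percentages quoted in the article below and are illustrative only:

```python
import math

# Power-law scaling Y = Y0 * N**beta: beta < 1 is sub-linear (infrastructure, resources),
# beta > 1 is super-linear (wages, patents, crime).

def doubling_effect(beta):
    """Factor by which Y changes when population doubles: total and per capita."""
    total = 2 ** beta
    per_capita = 2 ** (beta - 1)
    return total, per_capita

# Sub-linear infrastructure: the article's "85% more resources per doubling"
# corresponds to beta = log2(1.85) ~ 0.89 (often rounded to ~0.85 in the literature).
beta_infra = math.log2(1.85)
print(doubling_effect(beta_infra))   # (~1.85, ~0.93): less infrastructure per capita

# Super-linear socioeconomic output: "15% more per capita per doubling"
# corresponds to beta = 1 + log2(1.15) ~ 1.20 (often quoted as ~1.15).
beta_econ = 1 + math.log2(1.15)
print(doubling_effect(beta_econ))    # (~2.30, 1.15): more output per capita
```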

Uncompressed version follows below and at link ;-)

NYTimes: ... After two years of analysis, West and Bettencourt discovered that all of these urban variables could be described by a few exquisitely simple equations. For example, if they know the population of a metropolitan area in a given country, they can estimate, with approximately 85 percent accuracy, its average income and the dimensions of its sewer system. These are the laws, they say, that automatically emerge whenever people “agglomerate,” cramming themselves into apartment buildings and subway cars. It doesn’t matter if the place is Manhattan or Manhattan, Kan.: the urban patterns remain the same. West isn’t shy about describing the magnitude of this accomplishment. “What we found are the constants that describe every city,” he says. “I can take these laws and make precise predictions about the number of violent crimes and the surface area of roads in a city in Japan with 200,000 people. I don’t know anything about this city or even where it is or its history, but I can tell you all about it. And the reason I can do that is because every city is really the same.” After a pause, as if reflecting on his hyperbole, West adds: “Look, we all know that every city is unique. That’s all we talk about when we talk about cities, those things that make New York different from L.A., or Tokyo different from Albuquerque. But focusing on those differences misses the point. Sure, there are differences, but different from what? We’ve found the what.”

There is something deeply strange about thinking of the metropolis in such abstract terms. [REALLY?!?] We usually describe cities, after all, as local entities defined by geography and history. New Orleans isn’t a generic place of 336,644 people. It’s the bayou and Katrina and Cajun cuisine. New York isn’t just another city. It’s a former Dutch fur-trading settlement, the center of the finance industry and home to the Yankees. And yet, West insists, those facts are mere details, interesting anecdotes that don’t explain very much. The only way to really understand the city, West says, is to understand its deep structure, its defining patterns, which will show us whether a metropolis will flourish or fall apart. We can’t make our cities work better until we know how they work. And, West says, he knows how they work.

West has been drawn to different fields before. In 1997, less than five years after he transitioned away from high-energy physics, he published one of the most contentious and influential papers in modern biology. (The research, which appeared in Science, has been cited more than 1,500 times.) The last line of the paper summarizes the sweep of its ambition, as West and his co-authors assert that they have just solved “the single most pervasive theme underlying all biological diversity,” showing how the most vital facts about animals — heart rate, size, caloric needs — are interrelated in unexpected ways.

... In city after city, the indicators of urban “metabolism,” like the number of gas stations or the total surface area of roads, showed that when a city doubles in size, it requires an increase in resources of only 85 percent.

This straightforward observation has some surprising implications. It suggests, for instance, that modern cities are the real centers of sustainability. According to the data, people who live in densely populated places require less heat in the winter and need fewer miles of asphalt per capita. (A recent analysis by economists at Harvard and U.C.L.A. demonstrated that the average Manhattanite emits 14,127 fewer pounds of carbon dioxide annually than someone living in the New York suburbs.) Small communities might look green, but they consume a disproportionate amount of everything. As a result, West argues, creating a more sustainable society will require our big cities to get even bigger. We need more megalopolises.

But a city is not just a frugal elephant; biological equations can’t entirely explain the growth of urban areas. While the first settlements in Mesopotamia might have helped people conserve scarce resources — irrigation networks meant more water for everyone — the concept of the city spread for an entirely different reason. “In retrospect, I was quite stupid,” West says. He was so excited by the parallels between cities and living things that he “didn’t pay enough attention to the ways in which urban areas and organisms are completely different.”

What Bettencourt and West failed to appreciate, at least at first, was that the value of modern cities has little to do with energy efficiency. As West puts it, “Nobody moves to New York to save money on their gas bill.” Why, then, do we put up with the indignities of the city? Why do we accept the failing schools and overpriced apartments, the bedbugs and the traffic?

In essence, they arrive at the sensible conclusion that cities are valuable because they facilitate human interactions, as people crammed into a few square miles exchange ideas and start collaborations. “If you ask people why they move to the city, they always give the same reasons,” West says. “They’ve come to get a job or follow their friends or to be at the center of a scene. That’s why we pay the high rent. Cities are all about the people, not the infrastructure.”

It’s when West switches the conversation from infrastructure to people that he brings up the work of Jane Jacobs, the urban activist and author of “The Death and Life of Great American Cities.” Jacobs was a fierce advocate for the preservation of small-scale neighborhoods, like Greenwich Village and the North End in Boston. The value of such urban areas, she said, is that they facilitate the free flow of information between city dwellers. To illustrate her point, Jacobs described her local stretch of Hudson Street in the Village. She compared the crowded sidewalk to a spontaneous “ballet,” filled with people from different walks of life. School kids on the stoops, gossiping homemakers, “business lunchers” on their way back to the office. While urban planners had long derided such neighborhoods for their inefficiencies — that’s why Robert Moses, the “master builder” of New York, wanted to build an eight-lane elevated highway through SoHo and the Village — Jacobs insisted that these casual exchanges were essential. She saw the city not as a mass of buildings but rather as a vessel of empty spaces, in which people interacted with other people. The city wasn’t a skyline — it was a dance.

If West’s basic idea was familiar, however, the evidence he provided for it was anything but. The challenge for Bettencourt and West was finding a way to quantify urban interactions. As usual, they began with reams of statistics. The first data set they analyzed was on the economic productivity of American cities, and it quickly became clear that their working hypothesis — like elephants, cities become more efficient as they get bigger — was profoundly incomplete. According to the data, whenever a city doubles in size, every measure of economic activity, from construction spending to the amount of bank deposits, increases by approximately 15 percent per capita. It doesn’t matter how big the city is; the law remains the same. “This remarkable equation is why people move to the big city,” West says. “Because you can take the same person, and if you just move them to a city that’s twice as big, then all of a sudden they’ll do 15 percent more of everything that we can measure.” While Jacobs could only speculate on the value of our urban interactions, West insists that he has found a way to “scientifically confirm” her conjectures. “One of my favorite compliments is when people come up to me and say, ‘You have done what Jane Jacobs would have done, if only she could do mathematics,’ ” West says. “What the data clearly shows, and what she was clever enough to anticipate, is that when people come together, they become much more productive.” ...

Sunday, May 09, 2010

Climate change priors and posteriors



I recommend this nice discussion of climate change on Andrew Gelman's blog. Physicist Phil, the guest-author of the post, gives his prior and posterior probability distributions for temperature sensitivity as a function of CO2 concentration. I guess I'm somewhere between Skeptic and Phil Prior.
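For concreteness, here is a toy version of the prior-to-posterior update being discussed; the Gaussian forms and every number below are my own illustrative assumptions, not Phil's actual analysis:

```python
import numpy as np

# Toy Bayesian update for climate sensitivity S (warming per CO2 doubling, in deg C).
# Prior, likelihood, and all numbers are illustrative assumptions only.

S = np.linspace(0.0, 10.0, 1001)                      # grid of candidate sensitivities
prior = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)          # broad prior centered at 3 C
prior /= np.trapz(prior, S)

obs, sigma = 2.5, 1.0                                  # pretend noisy estimate of S
likelihood = np.exp(-0.5 * ((S - obs) / sigma) ** 2)

posterior = prior * likelihood                         # Bayes' rule, up to normalization
posterior /= np.trapz(posterior, S)

print(round(np.trapz(S * posterior, S), 2))            # posterior mean, ~2.65 C here
```

The Knightian-uncertainty worry in the next paragraph is precisely that in the real climate problem neither the prior nor the likelihood is known with this kind of confidence.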

As an aside, I think it is worth distinguishing between a situation where one has a high confidence level about a probability distribution (e.g., at an honest casino game like roulette or blackjack) versus in the real world, where even the pdf itself isn't known with any confidence (Knightian uncertainty). Personally, I am in the latter situation with climate science.

Here is an excerpt from a skeptic's comment on the post:

... So where are we on global climate change? We have some basic physics that predicts some warming caused by CO2, but a lot of positive and negative feedbacks that could amplify and attenuate temperature increases. We have computer models we can't trust for a variety of reasons. We have temperature station data that might have been corrupted by arbitrary "adjustments" to produce a warming trend. We have the north polar ice area decreasing, while the south polar ice area is constant or increasing. Next year an earth satellite will launch that should give us good measurements of polar ice thickness using radar. Let's hope that data doesn't get corrupted. We have some alternate theories to explain temperature increases such as cosmic ray flux. All this adds up to a confused and uncertain picture. The science is hardly "settled."

Finally the public is not buying AGW. Anyone with common sense can see that the big funding governments have poured into climate science has corrupted it. Until this whole thing gets an independent review from trustworthy people, it will not enjoy general acceptance. You can look for that at the ballot box next year.

For a dose of (justified?) certitude, see this angry letter, signed by numerous National Academy of Sciences members, that appeared in Science last week. See here for a systematic study of the record of expert predictions about complex systems. Scientists are only slightly less susceptible than others to groupthink.

Thursday, January 07, 2010

Wikipedia: emergent phenomenon?

Is Wikipedia a magical aggregator and filter of expertise from millions of different contributors? Or is it more like traditional encyclopedia projects, with a thousand or so core Wikipedians doing most of the work? The distribution of edits (a typical power law) supports the latter interpretation, but a detailed analysis of particular articles shows that important knowledge is injected by individuals who are not part of the core group.
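As a rough illustration of just how concentrated a power-law edit distribution can be, here is a small simulation; the distribution, exponent, and user count are assumptions, not Wikipedia's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: draw per-user edit counts from a heavy-tailed (Pareto) distribution
# and measure how much of the total comes from the most active 0.7% of users.
n_users = 100_000
edits = (1 + rng.pareto(a=1.1, size=n_users)).astype(int)

edits_sorted = np.sort(edits)[::-1]
top = int(0.007 * n_users)                              # the most active 0.7% of users
share = edits_sorted[:top].sum() / edits_sorted.sum()
print(f"top 0.7% of users make ~{share:.0%} of edits")  # a large share, as Wales found
```

Note that such concentration of edit counts says nothing by itself about who contributes the content, which is exactly the distinction drawn below.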

Aaron Swartz: I first met Jimbo Wales, the face of Wikipedia, when he came to speak at Stanford. Wales told us about Wikipedia’s history, technology, and culture, but one thing he said stands out. “The idea that a lot of people have of Wikipedia,” he noted, “is that it’s some emergent phenomenon — the wisdom of mobs, swarm intelligence, that sort of thing — thousands and thousands of individual users each adding a little bit of content and out of this emerges a coherent body of work.”† But, he insisted, the truth was rather different: Wikipedia was actually written by “a community … a dedicated group of a few hundred volunteers” where “I know all of them and they all know each other”. Really, “it’s much like any traditional organization.”

The difference, of course, is crucial. Not just for the public, who wants to know how a grand thing like Wikipedia actually gets written, but also for Wales, who wants to know how to run the site. “For me this is really important, because I spend a lot of time listening to those four or five hundred and if … those people were just a bunch of people talking … maybe I can just safely ignore them when setting policy” and instead worry about “the million people writing a sentence each”.

So did the Gang of 500 actually write Wikipedia? Wales decided to run a simple study to find out: he counted who made the most edits to the site. “I expected to find something like an 80-20 rule: 80% of the work being done by 20% of the users, just because that seems to come up a lot. But it’s actually much, much tighter than that: it turns out over 50% of all the edits are done by just .7% of the users … 524 people. … And in fact the most active 2%, which is 1400 people, have done 73.4% of all the edits.” The remaining 25% of edits, he said, were from “people who [are] contributing … a minor change of a fact or a minor spelling fix … or something like that.” ...

[But what if we analyze the amount of text contributed by each person, not just the number of edits? See original for analysis of edit patterns of specific articles, including amount of text added.]

... When you put it all together, the story become clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it. In addition, insiders rack up thousands of edits doing things like changing the name of a category across the entire site — the kind of thing only insiders deeply care about. As a result, insiders account for the vast majority of the edits. But it’s the outsiders who provide nearly all of the content.

And when you think about it, this makes perfect sense. Writing an encyclopedia is hard. To do anywhere near a decent job, you have to know a great deal of information about an incredibly wide variety of subjects. Writing so much text is difficult, but doing all the background research seems impossible.

On the other hand, everyone has a bunch of obscure things that, for one reason or another, they’ve come to know well. So they share them, clicking the edit link and adding a paragraph or two to Wikipedia. At the same time, a small number of people have become particularly involved in Wikipedia itself, learning its policies and special syntax, and spending their time tweaking the contributions of everybody else.

Tuesday, June 30, 2009

Fat tails and the cubic law of returns

I came across this nice review article on power-law distributions in economic and other contexts. A particularly interesting one is the following, governing short-term stock price fluctuations -- surprise, it's not log-normal!

6.1.1. The inverse cubic law distribution of stock price fluctuations:

The tail distribution of short-term (15 s to a few days) returns has been analyzed in a series of studies on data sets, first with a few thousand data points (Jansen & de Vries 1991, Lux 1996, Mandelbrot 1963), then with an ever increasing number of data points: Mantegna & Stanley (1995) used 2 million data points, whereas Gopikrishnan et al. (1999) used over 200 million data points. Gopikrishnan et al. (1999) established a strong case for an inverse cubic PL of stock market returns.

...Such a fat-tail PL yields a large number of tail events. Considering that the typical standard deviation of a stock's daily return is approximately 2%, a 10–standard deviations event is a day in which the stock price moves by at least 20%. From daily experience, the reader can see that those moves are not rare at all: essentially every week a 10–standard deviations event occurs for one of the (few thousand) stocks in the market. The cubic law quantifies that notion and states that a 10–standard deviations event and a 20–standard deviations event are 5^3 = 125 and 10^3 = 1000 times less likely, respectively, than a 2–standard deviations event.
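The arithmetic in that passage follows directly from a tail that falls off as x^-3; here is a quick check, with a Gaussian comparison added for contrast (my addition, not from the review):

```python
import math

# Under the cubic law the tail probability falls off as P(|r| > x) ~ x**(-3), so an
# n-sigma move is (n/2)**3 times rarer than a 2-sigma move.
def cubic_rarity(n_sigma, ref_sigma=2):
    return (n_sigma / ref_sigma) ** 3

print(cubic_rarity(10), cubic_rarity(20))   # 125.0, 1000.0 -- the ratios quoted above

# For contrast, a Gaussian tail would make a 10-sigma return ~3e21 times rarer than a
# 2-sigma one -- effectively impossible, at odds with what markets actually deliver.
def gaussian_tail(x):
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

print(gaussian_tail(2) / gaussian_tail(10))
```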

The figure below shows the probability distribution of 15-minute returns on 1,000 large-company stocks from data taken in 1994-1995.



Here is a figure showing the famous power-law scaling of metabolic rate with body mass in animals:

Monday, September 29, 2008

Complexity illustrated: Lehman WAS too connected to fail

This WSJ article illustrates what I discussed more abstractly in this earlier post Notional vs net: complexity is our enemy. The story claims that by allowing Lehman to fail, Treasury and the Fed triggered the final stage of the crisis that got us to where we are today. I've included my figures from the earlier post here.

...in an age where markets, banks and investors are linked through a web of complex and opaque financial relationships, the pain of letting a large institution go has proved almost overwhelming.

In hindsight, some critics say the systemic crisis that has emerged since the Lehman collapse could have been avoided if the government had stepped in.





The Fed had been pushing Wall Street firms for months to set up a new clearinghouse for credit-default swaps. The idea was to provide a more orderly settlement of trades in this opaque, diffuse market with a staggering $55 trillion in notional value, and, among other things, make the market less vulnerable if a major dealer failed. But that hadn't gotten off the ground. As a result, nobody knew exactly which firms had made trades with Lehman and for what amounts. On Monday, those trades would be stuck in limbo. In a last-ditch effort to ease the problem, New York Fed staff worked with Lehman officials and the firm's major trading partners to figure out which firms were on opposite sides of trades with Lehman and cancel them out. If, for example, two of Lehman's trading partners had made opposite bets on the debt of General Motors Corp., they could cancel their trades with Lehman and face each other directly instead.

This figure shows three trades which almost cancel. Remove one of the counterparties and you have chaos instead of hedges. In a last-ditch effort after letting Lehman fail, the New York Fed tried to cancel these trades out manually -- good luck! Why did we not have a central exchange in place earlier?
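Here is a stylized version of the netting problem sketched above; the counterparties, reference entity, and notionals are made up for illustration:

```python
# Toy illustration of bilateral CDS exposures: trades are
# (protection_buyer, protection_seller, reference_entity, notional). With Lehman in the
# middle of offsetting trades, removing it leaves its counterparties unhedged -- exactly
# what a clearinghouse would have netted away. Names and numbers are made up.

from collections import defaultdict

trades = [
    ("FundA",  "Lehman", "GM debt", 100),   # FundA bought protection from Lehman
    ("Lehman", "FundB",  "GM debt", 100),   # Lehman bought the same protection from FundB
]

def net_exposures(trades, exclude=None):
    """Net protection bought (+) or sold (-) per firm, optionally dropping a failed dealer."""
    net = defaultdict(int)
    for buyer, seller, _, notional in trades:
        if exclude in (buyer, seller):
            continue
        net[buyer] += notional
        net[seller] -= notional
    return dict(net)

print(net_exposures(trades))                    # Lehman nets to 0; FundA +100, FundB -100
print(net_exposures(trades, exclude="Lehman"))  # Lehman gone: both trades are in limbo
# The Fed's weekend workaround was to pair FundA and FundB directly, cutting Lehman out.
```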


Oops, there goes AIG! (Big issuer of CDS insurance.)

The reaction was most evident in the massive credit-default-swap market, where the cost of insurance against bond defaults shot up Monday in its largest one-day rise ever. In the U.S., the average cost of five-year insurance on $10 million in debt rose to $194,000 from $152,000 Friday, according to the Markit CDX index.

When the cost of default insurance rises, that generates losses for sellers of insurance, such as banks, hedge funds and insurance companies. At the same time, those sellers must put up extra cash as collateral to guarantee they will be able to make good on their obligations. On Monday alone, sellers of insurance had to find some $140 billion to make such margin calls, estimates asset-management firm Bridgewater Associates. As investors scrambled to get the cash, they were forced to sell whatever they could -- a liquidation that hit financial markets around the world. ...AIG was one of the biggest sellers in the default insurance market, with contracts outstanding on more than $400 billion in bonds.

To make matters worse, actual trading in the CDS market declined to a trickle as players tried to assess how much of their money was tied up in Lehman. The bankruptcy meant that many hedge funds and banks that were on the profitable side of a trade with Lehman were now out of luck because they couldn't collect their money.

...At around 7 a.m. Tuesday in New York, the market got its first jolt of how bad the day was going to be: In London, the British Bankers' Association reported a huge rise in the London interbank offered rate, a benchmark that is supposed to reflect banks' borrowing costs. In its sharpest spike ever, overnight dollar Libor had risen to 6.44% from 3.11%. But even at those rates, banks were balking at lending to one another.
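To see why a modest spread move forces such large collateral calls, here is a rough back-of-the-envelope using the standard mark-to-market approximation for credit-default swaps; the risky duration is an assumed value and the numbers are only illustrative:

```python
# Rough CDS mark-to-market: change in value ~ (spread change) x (risky duration) x notional.
# The $152k -> $194k annual cost of protecting $10M corresponds to spreads of 152bp -> 194bp.
# A risky duration of ~4.5 years for a five-year contract is an illustrative assumption.

spread_change = (194_000 - 152_000) / 10_000_000      # 42 bp = 0.0042
risky_duration = 4.5                                   # years, assumed
notional = 400e9                                       # AIG's reported CDS book, in dollars

mark_to_market_loss = spread_change * risky_duration * notional
print(f"${mark_to_market_loss/1e9:.1f} billion")       # ~$7.6 billion on that one move,
# for AIG's book alone; the $140 billion quoted above is the estimate across all sellers.
```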

Who was next after AIG? Time for a bailout!

...Goldman, Paulson's former employer, had up to $20B of CDS exposure to AIG. The current head of Goldman was the only Wall St. executive invited to the meetings between AIG and the government. Conflict of interest for soon-to-be King Henry Paulson?
