Pages

Wednesday, August 31, 2011

Will AI cause the extinction of humans?

Yesterday, at the 2011 FQXi conference in Copenhagen, Jaan Tallinn told us he is concerned. And he is not a man of petty worries. Some of us may be concerned they’ll be late for lunch or make a fool of themselves with that blogpost. Tallinn is concerned that once we have created an artificial intelligence (AI) superior to humans, the AIs will wipe us out. He said he has no doubt we will create an AI in the near future and he wishes that more people would think about the risk of dealing with a vastly more intelligent species.

Tallinn looks like a nice guy and he dresses very well and I wish I had something intelligent to tell him. But actually it’s not a topic I know very much about. Then I thought, what better place to talk about a topic I know nothing about than my blog!

Let me first say I think the road to AI will be much longer than Tallinn believes. It’s not the artificial creation of something brain-like, with as many synapses and neurons as a human brain, that’s the difficult part. The difficult part is creating something that runs as stably as the human body for a sufficiently long time to learn how the world works. In the end I believe we’ll go the way of enhancing human intelligence rather than creating a new one from scratch.

In any case, if you would indeed create an AI, you might think of making humans indispensable for their existence, maybe like bacteria are for humans. If they’re intelligent enough, they’ll sooner or later find a way to get rid of us, but at least it’ll buy you time. You might achieve that for example by never building any AI with their own sensory and motor equipment, but making them dependent on the human body for that. You could do that by implanting your AI into the, still functional, body of braindead people. That would get you in a situation though where the AIs would regard humans, though indispensable, as something to grow and harvest for their own needs. I.e., once you’re an adult and have reproduced, they’ll take out your brain and move in. Well, it kind of does solve the problem in the sense that it avoids the extinction of the human species, but I’m not sure that’s a rosy future for humanity either.

I don’t think that an intelligent species will be inherently evil and just remove us from the planet. Look, even we try to avoid the extinction of species on the planet. Yes, we do grow and eat other animals but that I think is a temporary phase. It is arguably not a very efficient use of resources and I think meat will be replaced sooner or later with something factory-made. You don’t need to be very intelligent to understand that life is precious. You don’t destroy it without a reason because it takes time and resources to create. The way you destroy it is through negligence, or call it stupidity. So if you want to survive your AI, you’d better make them really intelligent.

Ok, I’m not taking this very seriously. Thing is, I don’t really understand why I should be bothered about the extinction of humans if there’s some more intelligent species taking over. Clearly, I don’t want anybody to suffer in the transition and I do hope the AI will preserve elements of human culture. But that I believe is what an intelligent species would do anyway. If you don’t like the steepness of the transition and want a more gradual succession from humans, then you might want to go the way I’ve mentioned above, the way of enhancing the human body rather than starting from scratch. Sooner or later genetic modifications of humans will take place anyway, legal or not.

In the end, it comes down to the question what you mean by “artificial.” You could argue that since humans are part of nature, nothing human made is more “artificial” than, say, a honeycomb. So I would suggest then instead of creating an artificial intelligence, let’s go for natural intelligence.

Monday, August 29, 2011

FQXi Conference 2011

We just arrived in Copenhagen after a 2-day trip on the National Geographic Explorer, a medium sized cruise ship, along Norway’s coast. On board were about 130 scientists and a couple of spouses in different sizes, plus an incredibly efficient, friendly, and competent crew that didn’t mind having nosy physicists hanging around on the bridge.

The 2011 FQXi conference turns out to be very different from the previous one (2009 on the Azores), and not only thanks to the unique bonding experience of shared sea-sickness. As Sean Carroll mentioned the other day, during the organization of this conference on the nature of time, the FQXi folks were confronted with an application for a similar event with a similar topic and so they decided to join forces. As a result, this conference is larger and much more interdisciplinary than the previous one. Besides the physicists and philosophers, there are neurobiologists, biologists and psychologists, and a selection of guys interested in artificial intelligence from one perspective or another, as well as a camera crew that, I am told, is here for PBS.

Among the physicists, the usual suspects are Max Tegmark and Anthony Aguirre, Paul Davies, George Ellis, David Albert, Garrett Lisi, Fotini Markopoulou, Julian Barbour, and Scott Aaronson. But there’s also Geoffrey West from the Santa Fe Institute, Jaan Tallinn, one of the developers of Skype, and David Eagleman the possibilian, just to mention a few. Also around are George Musser from Scientific American and Zeeya Merali who is blogging for FQXi here. There’s a list of alleged attendees here, though some of them I haven’t seen so far.

It is an interesting mix of people. I do enjoy interdisciplinary events a lot because there is always some cool research to learn about that I didn’t know of before. I have however grown skeptical about the benefits of interdisciplinarity when it comes to pushing forward on a particular problem. Take a topic such as free will or the origin of our impression of “now” that might or might not be an illusion. Yes, neurobiologists and psychologists have something to say about that. But they don’t in fact mean the same as physicists and I am not sure that, for example, the question of how we manage to remember the past and imagine the future, or fail to distinguish between true and false memories, has any relevance for physicists trying to figure out the relevance of the past hypothesis, the consistency of alternatives to the block universe, or the role of observers in the multiverse. In fact, you already have people talking past each other within one discipline: If you ask three physicists what they mean by “free will” you’ll get four different answers. And after you’ve spent a significant amount of time figuring out what they mean to begin with, there isn’t much left they have to say to each other.

That’s the downside of mixing academics – in my experience it does not add depth. Interdisciplinary exchange however adds breadth. Talking to somebody who has addressed a question for a completely different reason and with completely different methods helps one look at it from a different point of view, opening new ways forward. In my opinion though the largest benefit of events like this conference comes from just getting together a group of interesting and intelligent people who make an effort to listen to and complement each other. After some years at PI and NORDITA I’ve pretty much come to take for granted having plenty of folks at my disposal to talk to should I feel like it, but after the baby break I appreciate the opportunity for such exchange much more.

The idea with putting us on a ship was clearly to get us off the Internet for a while. I personally don’t have the impression that people at the conferences I usually go to make obsessive use of the internet, but evidently some need to have an evil third party as an excuse for not being available at least for a few days. I don’t find it such a great idea to punish all of us because a few guys can’t live without their newsfeed. I wasn’t the only one with family at home who would have appreciated at least a phone. (For an appropriate price that is. If you really, really had to, you could have paid for an internet connection at $10 per kB or something like that.)

These are some first impressions. Once I've had some time to process what I've heard and learned, I might summarize some of the main questions that were discussed. But now (whatever that might be) I have to locate my baggage, which I last saw this morning vanishing into a bus somewhere.

Thursday, August 25, 2011

Away note

I'll be away for a week at the FQXi conference "Setting Time Aright." A significant number of the participants are reportedly nuts, so I will be in good company. I'm supposed to moderate a session on "Choice" for reasons that are somewhat mysterious to me, but since I don't believe in free will I guess they had no choice, haha.

This is my first conference attendance since the babies and, believe me, it's required a significant amount of organization. It didn't help that they're doing half of it on a ship and the idea of having to get around on a ship with a twin stroller didn't really appeal to Stefan and me. So I go, and Superdaddy stays with the babies while I'll cry over the no-signal sign on my BlackBerry. Side effects may include blogging congestion.

Sunday, August 21, 2011

Physics and Philosophy

I'm looking for topics where theoretical physics has relevance for philosophy, for no particular reason other than my curiosity and maybe yours as well. Here are the usual suspects that came to my mind:

  1. Are there limits to what we can possibly know? The human brain has a finite capacity and computing power. What limits does this set? Is it possible to extend them? What is consciousness?

  2. Why is the past different from the future? What is "The Now" and why do we have an "arrow of time"? (Or several?)

  3. Is there a fundamental theory that explains everything we observe and experience? Is this theory unique and does it explain everything only in principle or also in practice?

  4. Do we have free will? And what does that question mean?

  5. Are there cases where reductionism does not work? And what does that imply for 3?

  6. What is the role of chaos and uncertainty in the evolution of culture and civilization? Is it possible to reliably model and predict the dynamics of social systems? If so, what does that mean for 4?

  7. What is reality? What does it mean to "exist" and can an entirely mathematical theory explain this? Does everything mathematical exist in the same way? Why does anything exist at all?

  8. And Stefan submits: What is the ontological status of AdS/CFT?

Thursday, August 18, 2011

What makes you you?

Stefan's life is tough. When he comes home, instead of a cold beer (I support the local wineries) and dinner (ha-ha-ha) he gets one of the crying babies and a washcloth. And then there's his wife who lacks googletime and greets him with bizarre questions. What frequency does a CD player operate on? Something in the near infrared. How many atoms do you need to encode one bit? Maybe somewhat below the million it was in 2008. And why does he actually know all this stuff? Male brains are funny. He does not, for example, know that the Aspirin is in the medicine cabinet, of all places. But yesterday he took a pass on this one, so here's my question to you.

Suppose you have a transmitter, Starship Enterprise style. It reads all the information of all particles in your body (all necessary initial values), disintegrates your body, sends the information elsewhere, and reassembles it. Did you die in that process?



You could object that this process isn't physically possible, either theoretically or practically. Theoretically, there are for example the no-cloning and no-teleportation theorems in quantum information. But you might not actually need all the quantum details to reconstruct a human body. (I'm not sure though the role of quantum physics for consciousness has yet been entirely clarified.) And, if I reassemble you elsewhere you are arguably different in that the relative location of your body to all other objects in the universe has changed. But again, it doesn't seem like that's of any relevance. Or you could say that there won't be enough time to ever perform this process in the history of the universe, or something like that. But these answers seem unsatisfactory to me.

Then you might say, well, if it looks like me, walks like me, and quacks like me, it probably is me. That is, nobody, including the person you have assembled could tell any difference. So that would seem like you didn't die.

On the other hand, the operation of your brain has a discontinuity in its timeline in the sense that it didn't do anything during transmission. That is in contrast to, say, anesthesia, where your brain is actually quite active. (Interesting SciAm article on that here.) So it would seem that what constitutes 'you' did cease to operate and 'you' did die.

But then again, who really cares if you stopped thinking for a few seconds and then continued that process, having in between exchanged the set of quarks and electrons you're operating with. Now, however, consider that I don't send the information to one place, but to ten. And I assemble not one of you, but ten. Which one are you?

Oh-uh, headache. I can understand that Stefan prefers to bathe the baby. Now where is the Aspirin?

Sunday, August 14, 2011

Was there really a man on the moon? Are you sure?

Some weeks ago, the tree octopus made headlines again. If you had never heard of this creature before, don’t worry, it is an internet hoax used for classes on information literacy. It is easy enough to laugh about the naiveté of students believing in the tree octopus. Or people believing in spaghetti trees for that matter. Scientists in particular are obliged to carefully check all facts they use in their arguments. But in reality, none of us can check all the facts all the time. A lot of what we know is based on trust and an ethereal skill called ‘common sense.’ We’re born trusting that adults tell us the truth – about the binky fairy. Most of us grow up adding a healthy dose of skepticism to any new information, but we still rely heavily on trusted sources and the belief that few people are willfully evil. What happened to that in the age of the internet?

When I write a paper, I usually make an effort to check that the references I am citing do actually show what they claim, at least to some level. Sometimes, digging out the roots of a citation tree holds spaghetti surprises. But especially when it comes to experiments, fact checking comes to a quick halt because it would simply take too much time to put each and everything under scrutiny. And then peer review has its shortcomings. In my daily news reading however I am far less careful. After all, I’m not being paid for it and I have better things to do than figuring out if every story I read (Can you really get stuck on an airplane’s vacuum toilet?) is true. Most of the time it doesn’t actually matter because, you see, urban legends are entertaining even if not true. And, well, don’t flush while you sit.

I think of myself as a very average person, so I guess that most of you use similar recipes as I do to roughly estimate a trust-value for some online resource. The rule of thumb that I use is based on two simple questions: 1) How much effort would one have to make to fake this piece of information in its present form, and 2) How evil would one have to be?

How much effort would one have to make to put up a website about a non-existing animal? Well, you have to invest the time to write the text, get a domain, and upload it. I.e. not so very much. How evil do you have to be? For the purpose of teaching internet literacy, somebody probably believed he was being good. Trust-value of the tree-octopus: Nil. How much effort do you have to make to fake some governmental website? Some. And it’s probably illegal too, so it does require some evil. How much effort would you have to make to fake the moon landing?

Of course such trust-value estimates have large error bars. Faking somebody else’s writing style for example can be quite difficult (if it wasn’t, I’d be writing like Jonathan Franzen), but that depends on the writing style to begin with. If you’ve never registered a domain before you might vastly overestimate the effort it takes. And how difficult is it really to convince some billion people the Earth is round? (Well, almost.) Or to convince them some omniscient being is watching over them and taking note every time they think about somebody else’s underwear? There you go. (And Bielefeld, btw, doesn’t exist either.)
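Just to make the rule of thumb concrete, here is a toy version in code. The 0-10 scales, the equal weighting, and the example scores are entirely made up; it illustrates the two questions, nothing more.

```python
# Toy version of the two-question rule of thumb above. The 0-10 scales,
# the equal weighting, and the example scores are made up for illustration.

def trust_value(effort_to_fake, evil_required):
    """Both inputs on a rough 0-10 scale; returns a trust value between 0 and 1."""
    return round((effort_to_fake + evil_required) / 20, 2)

print(trust_value(effort_to_fake=1, evil_required=1))    # tree octopus site: ~0.1
print(trust_value(effort_to_fake=9, evil_required=10))   # faked moon landing: ~0.95
```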

The trustworthiness of Wikipedia is a question with more than academic value. For better or worse, Wikipedia has become a daily source of reference for hundreds of millions of people. Its credibility comes from its articles being scrutinized by millions of eyes. Yet, it is very difficult to know how many and which people did indeed check some piece of information, and how much they were influenced by the already existing entry. The English Wikipedia site thus, very reasonably, has a policy that information needs to have a source. Reasonable as that may sound, it has its shortcomings, a point that was made very well in a recent NYT article by Noam Cohen who reports on a criticism by Achal Prabhala, an Indian advisor to the Wikimedia foundation.

There is arguably information about the real world that is not (yet?) to be found in any published sources. Think of something trivial like good places in your neighborhood to find blackberries (the fruit)1. More interestingly, Prabhala offered the example of a children’s game played in some parts of India, and its Wikipedia article in the local language, Malayalam. Though the game is known by about 40 million people, there is no peer reviewed publication on it. So what would have constituted a valid reference for the English version of the website? What counts as a trusted source? Do videos count? Do the authors of the Wikipedia article have to random sample and analyze sources with the same care as a scientific publication would require? It seems then, the information age necessitates some rethinking of what constitutes a trusted source other than published works. Prabhala says:
“If we don’t have a more generous and expansive citation policy, the current one will prove to be a massive roadblock that you literally can’t get past. There is a very finite amount of citable material, which means a very finite number of articles, and there will be no more.”

Stefan remarked dryly they could just add a reference to Ind. J. Anth. Cult. [in Malayalam], and nobody would raise an eyebrow. Among physicists this is, tongue-in-cheek, known as “proof by reference to inaccessible literature” (typically to some obscure Russian journal in the early 1950s). The point is, asking for references is useless if nobody checks even the existence of these references. Most journals do now have software that checks reference lists for accuracy and at the same time for existence. The same software will inevitably spit out a warning if you’re trying to reference a living review.

But to come back to Wikipedia: It strikes me as a philosophical conundrum, a reference work that insists on external references. Not only because some of these references may just not exist, but because with a continuously updated work, one can create circular references. Take as an example the paper “Moisture induced electron traps and hysteresis in pentacene-based organic thin-film transistors” by Gong Gu and Michael G. Kane, Appl. Phys. Lett. 92, 053305 (2008). (Sounds seriously scientific, doesn’t it?) Reference [13] cites Wikipedia as a source on fluorescent lamps. There is a paper published in J. Phys. B that cites Wikipedia as a source for the double-slit experiment, and a PRL that cites the Wikipedia entry on the rainbow. Taemin Kim Park found a total of 139 citations to Wikipedia in the fields of Physics and Astronomy in the Scopus database as of January 20112.

Citing Wikipedia would not by itself be a problem. But the vast majority of people who cite websites do not add the date on which they retrieved the site. More disturbingly, the book “World Wide Mind” that I read recently had a few “references” to essays that merely mention they can easily be found by searching for [keywords], totally oblivious to the fact that the results of such a search change by the day, depend on the person searching, and that websites move or vanish. (Proof by Google?)

While the risk for citation loops increases with frequently updated sources, it is not an entirely new phenomenon. A long practiced variant of the “proof by reference” is citing one’s own “forthcoming paper” (quite common if page restrictions don’t allow further elaboration), but in this forthcoming paper - if it comes forth - one references the earlier paper. After ten or so self-referencing papers one claims the problem solved and anybody who searches for the answer will give up in frustration. (See also: Proof by mutual reference.)
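For illustration, here is a minimal sketch of how one could check a small citation graph for such loops. The graph below is hypothetical: the edge from the Wikipedia entry back to the paper is invented to close the loop, and real edges would have to come from parsed reference lists or a citation database.

```python
# Minimal sketch: walk along "cites" edges depth-first and report the first
# citation loop found. The example graph is hypothetical; the edge from the
# Wikipedia entry back to the paper is invented for the example.

def find_loop(citations, start, path=None):
    path = (path or []) + [start]
    for ref in citations.get(start, []):
        if ref in path:                            # back to something already on the path
            return path[path.index(ref):] + [ref]
        loop = find_loop(citations, ref, path)
        if loop:
            return loop
    return None

citations = {
    "Gu & Kane 2008": ["Wikipedia: Fluorescent lamp"],
    "Wikipedia: Fluorescent lamp": ["Gu & Kane 2008"],   # hypothetical later edit
}
print(find_loop(citations, "Gu & Kane 2008"))
# ['Gu & Kane 2008', 'Wikipedia: Fluorescent lamp', 'Gu & Kane 2008']
```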

Maybe the Wikipedia entry on the octopus hoax is a hoax?

Take away message: References in the age of the internet are moving targets and tracing back citations can be tricky. Restricting oneself to published works only leaves out a lot of information. Citation loops by referencing frequently updated websites can create alternate realities. But don’t worry, somewhere in the level 5 multiverse it’s as real as, say, the moon landing.

Have you cited or would you cite a Wikipedia article in a scientific publication? If you did, did you add a date?



1 And why isn't there a website where one can enter locations of fruit trees and bushes that nobody seems to harvest? Because where we live a lot of blackberries, cherries, plums, pears, and apples are just rotting away. It’s a shame, really.
2 From Park's paper, it is not clear how many of these articles citing Wikipedia were also about Wikipedia. The examples I mentioned were dug out by Stefan.

Tuesday, August 09, 2011

Condensed penguins

During my time in Canada, the coldest temperature I recall reading off the digital display on the way back home was -28°C. I couldn't help asking myself why humans ever settled in such a hostile environment (and wtf I was doing there). But if you think Canadians are tough (and Germans wimps), hear the story of the Emperor Penguin (Aptenodytes forsteri), which lives in Antarctica.

The Emperor Penguin's adaptation to the cold, which can drop down to -50°C during the Antarctic winter, is plainly amazing. Feathers, fat, and their ability to increase their metabolic rate at low temperatures allow the penguins to survive. Equally amazing, but also bizarre, is the Emperor Penguin's breeding behavior.

Penguin colonies of up to some thousands of birds have nesting areas inland that are, depending on the annual ice thickness, 50-120 km away from the edge of the pack ice. At the beginning of the Antarctic Winter, some time in March or April, the penguins get out of the water and travel to their nesting areas, mostly walking or sliding on the ice. After mating, the female lays a single egg in late May or early June and passes it on to the male for incubation while she walks back to the shore.

In an environment of ice, snow, and the occasional rock the penguins can't build nests, so they balance the egg on their feet in their brood pouch. And, since there isn't much fish to find on the pack ice, they don't eat. Yes, you read that correctly: They walk a hundred kilometers, the female lays an egg and walks back a hundred kilometers, while the male sits on the egg for another 2 months, during the Antarctic Winter, in the dark, without the female, and all that without eating a thing. By the time the egg hatches, the male has fasted for almost 4 months, lost half of his body weight, and hopes for the female to return because he has nothing to feed the chick. And then he still has to walk back to the shore so he doesn't starve. But hey, my husband assembled the baby cribs!

There's a great documentary, "The March of the Penguins," telling this story:



But the penguins know some physics too!

While a single penguin is able to maintain its core body temperature in the freezing cold, this costs a lot of energy which he can't afford during his Winter fast. So what the Emperor penguins do is they form huddles. The density of the huddles increases with falling air temperature. If the huddle gets very dense, the penguins are in a nearly hexagonal arrangement. In a study from about a decade ago, researchers glued measuring devices to some penguins' lower backs. They found that the temperature inside huddles can reach as much as 37.5°C (Gilbert et al, arXiv:q-bio/0701051v1 [q-bio.PE]).

The penguins in these huddles do not stand still, but they move in occasional small steps which have recently been the subject of another study (Zitterbart et al, PLoS ONE 6(6): e20260). The researchers shot movies of the penguin huddles and tracked the position of the birds. As you will easily notice if you read the paper, Daniel Zitterbart is a condensed matter physicist who compares the huddling penguins to particles with an attractive interaction and the tight huddling to a jamming transition in granular materials. Except that the penguins manage to prevent jamming by coordinated movements. The little steps of the penguins propagate through the huddle like density waves. They measured densities up to 21 birds per square meter.
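To get a feeling for how such a density wave can arise, here is a deliberately crude one-dimensional toy model, not the authors' analysis: penguins stand in a line and take a small step forward whenever the gap to the bird in front has opened up. All numbers are made up.

```python
# Crude 1-D toy model (not the model from Zitterbart et al): each bird steps
# forward once the gap to the bird ahead exceeds a threshold, so a single step
# at the front travels backwards through the line like a density wave.
import numpy as np

n_birds, gap, step = 10, 0.3, 0.1            # made-up numbers, in metres
x = np.arange(n_birds)[::-1] * gap           # positions, bird 0 is at the front
x[0] += step                                 # the front bird takes one step

history = [x.copy()]
for _ in range(12):
    prev = x.copy()
    for i in range(1, n_birds):
        if prev[i - 1] - prev[i] > gap + 1e-9:   # gap has opened up: follow
            x[i] += step
    history.append(x.copy())

for t, pos in enumerate(history[:5]):
    print(f"t={t}: " + " ".join(f"{p:4.2f}" for p in pos))
```

Running this, the single step travels backwards through the line by one bird per time step, which is roughly the wave-like propagation described for the real huddles.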

According to their paper, the penguins' little steps have a three-fold benefit. One is that they help the packing get denser, much the same way as tapping a bag of ground coffee. The second is that they move the whole huddle, allowing huddles to merge and adjust position and direction. The third one is a turnover of penguins in the huddle, moving those from the outside towards the warmer inside. Though one might argue that what is actually responsible for the turnover is not the little forward steps, but the penguins at the front leaving the huddle and joining it (or another huddle) at the back. But without the forward steps, the turnover would make the huddle move backward.

The below movie from Zitterbart et al's paper (Yes, we live in the age of Harry Potter, where paper has moving pictures!) shows a time lapse of the penguin huddling (actual time about 1h):



I was very confused about the penguin turnover because in all the Emperor Penguin huddles in "The March of the Penguins" and in the photos I had seen there were only penguin backs, and I could not for the hell of it figure out where the penguins are supposed to go if they are all facing towards each other. So I wrote an email to Daniel Zitterbart who kindly explained that there are two different kinds of huddles that have been observed: those that they've described in their paper, which have a forward direction, and circular ones that I had seen images of. He writes that no one really knows how the circular ones work, but they hope to find out with the next experiment.

Friday, August 05, 2011

Rehumanized

This is the previously mentioned commentary on Mark Slouka’s article “Dehumanized: When math and science rule the school.” Since the article is quite lengthy, I’ve added a brief summary.

Summary

In his article “Dehumanized,” Mark Slouka argues that the US education’s focus on math and science and the neglect of the humanities spell the demise of democracy. The American education’s “long running affair with math and science” is “obsessive, exclusionary” and “altogether unhealthy.” And that is because the ways of science are “often dramatically anti-democratic.” “There are many things,” Slouka writes “math and science do well, and some they don’t. And one of the things they don’t do well is democracy.”

Referring to a quote by Dennis Overbye that “Nobody was ever sent to prison for espousing the wrong value for the Hubble constant,” Slouka complains that “To maintain its “sustainable edge,” a democracy requires its citizens to actually risk something, to test the limits of the acceptable… If the value you’re espousing is one that could never get anyone, anywhere, sent to prison, then strictly democratically speaking you’re useless.” Only humanists are democratically useful because “upsetting people is arguably the very purpose of the arts and perhaps of the humanities in general.” That is also the reason, Slouka explains, totalitarian societies are skimping on the dangerously upsetting humanities: “Why would a repressive regime support a force superbly designed to resist it?”

The last thing his humanist colleagues should do, Slouka says, is to succumb to the capitalists’ demand for accountability and economic utility, and attempt to fit in by justifying their existence on the enemy’s terms. “In a visible world, the invisible does not compute… in a horizontal world of “information” readily convertible to product, the verticality of wisdom has no place.” And Slouka evidently thinks wisdom is in the domain of the humanities. The trend to math and science is “the victory of whatever can be quantified over everything that can’t” and clearly, in his view, one that should be opposed.

Comments

Slouka sets out to make a case against neglect of the humanities in American education and ends up calling scientists the useless couch potatoes of democratic societies. But in his arguments he makes several leaps. Most importantly, he equates “the sciences” with “the scientists” and he mixes up the role of democracy in science and the role of science for democracy, two very different things.

The process of knowledge discovery in science is not democratic. It has never been, and I hope it never will be, for it would be a disaster. It is useful to think of it from a systems perspective: scientific progress just doesn’t work by voting. I keep saying that it would be good if we had a better understanding of its workings and what feedback mechanisms are beneficial, but this much we know. That scientific knowledge discovery doesn’t operate democratically however doesn’t mean scientists don’t understand democracy or its relevance. Science teaches you to look at the evidence, to search for causal relations and correlations, and to identify and fix problems. Scientists know about the limits of predictability and the inevitability of uncertainty. They know what that statistic means and how to read that figure. They know the value of checking the references and that of reasoned argumentation. (Well, we're all human ;-)) The evidence says women are safer drivers than men. Upsetting? Where would democracy be without scientists?

But yes, scientists aren’t the first to take to the streets if the world doesn’t run as they think it should. The people you find in the streets, those who start a revolution and throw the stones, are for the most part young unemployed males. Something to do with hormones too I guess; I’m sure somebody somewhere wrote a paper about this. The people who like their jobs, they stay in the lab and crunch the numbers because, actually, the world never runs as they think it should, but isn’t it so damned pretty if you look at it through a microscope, telescope, or binocular HMD? So I guess what Slouka is saying then is that we need the humanities because people who don’t like their job are more likely to join that demonstration tomorrow?

Okay, I’m being unfair because I actually agree with Slouka that the trend towards measuring and quantifying everything including success and knowledge gain is unhealthy. The process of measurement itself disturbs the process it is supposed to help - a problem we have discussed several times on this blog. Though, according to Slouka, a scientist like me should be positive about this trend towards reliance on metrics. Considering how divided the scientific community is over the use of any such measure for scientific success, Slouka doesn’t seem to have bothered talking to his colleagues from the science departments.

Slouka’s main point was about the American education system, and he’d have done better not to overgeneralize his argument. Having grown up in Germany, I can’t judge the quality of the American education system. Clearly, you want to teach children how the society they live in works, and that includes politics, history, and economics as well as all aspects of human culture. Needless to say, many of these subjects are interrelated. The impression I got during my years in the USA is that many students there have little or no idea what democracy is or how it works, and even less do they actually know what communism, socialism, and social democracy are – and what the differences between them are. I talked to several people who actually thought consumerism is a form of democracy, and I vividly recall talking to one guy who thought Germany is socialist. Such confusions explain a lot of nonsense I keep reading online and are certainly not helpful to informed decision making. I am not sure though how representative that impression is. Maybe the people who talk to me are just oddballs.

And isn’t it ironic that Slouka is bemoaning American education’s failure to produce good citizens, when to some extent I owe my school education’s focus on democratic values to the Americans of the previous generation? The first time some US officer said to me “I’m just following orders,” I stood in shock, having been taught a million times since Kindergarten to never, ever, justify an action by referral to an order whose purpose I cannot explain and bring in line with my conscience. After several similar incidents, I thought that’s just me, till somebody told me about their German friend who in reply to the same remark by a US officer uttered promptly “That’s what the Nazis said.” Which, however, even with a German accent, fell on deaf American ears. (And that hopefully explains why I give a shit about your so-called policies.)

The other day, I came across this article by Bruce Levine listing “8 Reasons Young Americans Don’t Fight Back: How the US Crushed Youth Resistance” which you might like or not like, but point 3 “Schools That Educate for Compliance and Not for Democracy” is interesting in the context of Slouka’s article. Levine lets us know that “Upon accepting the New York City Teacher of the Year Award on January 31, 1990, John Taylor Gatto upset many in attendance by stating: “The truth is that schools don’t really teach anything except how to obey orders.””

Taken together, Slouka makes some bad points and some good points, but he makes both badly. Trying to make a case for the value of good writing, Slouka asks “Could clear writing have some relation to clear thinking?” In reply to which I want to quote Niels Bohr: “Never express yourself more clearly than you are able to think.”

Monday, August 01, 2011

This and That

Some well-written and interesting paragraphs that I came across recently.
  • Steve Mirsky in Scientific American reports this amusing anecdote:

    I was reminded of preposterously precocious utterances by tiny tykes during a brief talk that string theorist Brian Greene gave at the opening of the 2011 World Science Festival in New York City on June 1. Greene said he sometimes wondered about how much information small children pick up from standard dinner-table conversation in a given home. He revealed that he got some data to mull over when he hugged his three-year-old daughter and told her he loved her more than anything in the universe, to which she replied, “The universe or the multiverse?”

  • Mark Slouka's article Dehumanized: When math and science rule the school makes a fundamentally flawed argument (which I might make the content of a longer post), but is one of the most beautifully written texts I've come across lately. I particularly liked this part:
    Consider the ritual of addressing our periodic “crises in education.” Typically, the call to arms comes from the business community. We’re losing our competitive edge, sounds the cry. Singapore is pulling ahead. The president swings into action. He orders up a blue-chip commission of high-ranking business executives (the 2006 Commission on the Future of Higher Education, led by business executive Charles Miller, for example) to study the problem and come up with “real world” solutions.

    Thus empowered, the commission crunches the numbers, notes the depths to which we’ve sunk, and emerges into the light to underscore the need for more accountability. To whom? Well, to business, naturally. To whom else would you account? And that’s it, more or less. Cue the curtain.

  • And David Eagleman's article The Brain on Trial, which argues for "a scientific approach to sentencing," gives the reader a lot to think about.
    Who you even have the possibility to be starts at conception. If you think genes don’t affect how people behave, consider this fact: if you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do. These statistics alone indicate that we cannot presume that everyone is coming to the table equally equipped in terms of drives and behaviors.

    And this feeds into a larger lesson of biology: we are not the ones steering the boat of our behavior, at least not nearly as much as we believe. Who we are runs well below the surface of our conscious access, and the details reach back in time to before our birth, when the meeting of a sperm and an egg granted us certain attributes and not others. Who we can be starts with our molecular blueprints—a series of alien codes written in invisibly small strings of acids—well before we have anything to do with it. Each of us is, in part, a product of our inaccessible, microscopic history. By the way, as regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

Saturday, July 30, 2011

Interna

Lara and Gloria are now 7 months old. During the last month, they have made remarkable progress. Both can now roll over in either direction, and they also move around by pushing and pulling. They have not yet managed to crawl, but since last week they can get on all fours, and I figure it's a matter of days till they put one knee in front of the other and say good-bye to immobility. They still need a little support to sit, but they do better every day.

While the twins didn't pay any attention to each other during the first months, now they don't pay attention to anything else. Gloria doesn't take any notice of me if Lara is in the room, and Lara loses all interest in lunch if Gloria laughs next door. The easiest way to stop Gloria from crying is to place her next to her sister. However, if one leaves them unattended they often scratch and hit each other. I too am covered with bruises (but hey, it rattles if I hit mommy's head!), scratches (how do you cut nails on a hand that's always in motion?), and the occasional love bite that Lara produces by furiously sucking on my upper arm (yes, it is very tasty). Gloria is still magically attracted to cables, and Lara has made several attempts to tear down the curtains.

Lara and Gloria are now at German lesson 26: da-da, ch-ch, dei-dei-dei, aga-a-gaga. It is funny that they make all these sounds but haven't yet attempted to use them for communication. They just look at us with big eyes when we speak and remain completely silent. Though they seem to understand a few words like Ja, Nein, Gut, Milch. They also clearly notice if I speak English rather than German.

For the parental reading, this month I've enjoyed Ingrid Wickelgren's article "The Miracle of Birth is that Most of Us Figure Out How to Mother - More or Less." Quoting research that shows some brain is useful for parenting too, she writes:
"To take care of a baby's needs, mom needs to be able to juggle tasks, to prioritize on the fly, rapidly, repeatedly and without a lot of downtime... Mothering tests your attention span, ability to plan, prioritize, organize and reason as much as does a day at the office."

Well, it somewhat depends on what you used to do in that office of course. But yeah, I suppose some organization skills come in handy for raising twins. I won't lie to you though, singing children's rhymes isn't quite as intellectually stimulating as going with your colleague through the new computation. But Gloria always laughs when I read to her the titles of new papers on the arXiv.

On the downside, the Globe and Mail reported the other day on "Divorce, depression: The ugly side of twins," summing up "the infant treadmill":
"Cry. Breastfeed. Bottle-feed. Burp. Breast pump. Diaper. Swaddle. Ninety minutes of baby maintenance, then 90 minutes of trying to stay on top of sleep and domestic chores, then repeat. And so on."

Oh, wait, they forgot cleaning the bottles, doing the laundry, picking up baby because she's been spitting all over herself, washing baby, changing her clothes, changing the bed sheets, putting baby back into bed, putting the bottles into the sterilizer, putting the laundry into the dryer, taking the other baby out of bed because she's been spitting... Indeed, that's pretty much how we spent the first months. But it gets better and thanks, we're all doing just fine.

You can also disregard all the above words and just watch the below video. And if you think they're cute, don't forget they'll get cuter for two more months, so check back ;-)


PS: Oh, and please excuse the green thing in the video. New software and I haven't yet really figured out how it works.

Thursday, July 28, 2011

Prediction is very difficult

Niels Bohr was a wise man. He once said: "Prediction is very difficult, especially about the future." That is especially true when it comes to predictions about future innovations, or the impact thereof.

In an article in "Bild der Wissenschaft" (online, but in German, here) about the field of so-called future studies, writer Ralf Butscher looked at some predictions made by the Fraunhofer Institute for Systems and Innovation Research (ISI) in 1998. The result is sobering: In most cases, their expert panel didn't even correctly predict the trends of already developed technologies over a time of merely a decade. They did for example predict the human genome would be sequenced by 2008. In reality, it was sequenced already in 2001. They did also predict that by 2007 a GPS-based toll-system for roads would be widely used (in Germany). For all I know no such system is on the horizon. To be fair, they said a few things that were about right, for example that beginning in 2004, flat screens would replace those with cathode-ray tubes. But by and large it seems little more than guesswork.

Don't get me wrong - it's not that I am dismissing future studies per se. It's just that when it comes to predicting innovations, history shows such predictions are mostly entertaining speculations. And then there are the occasional random hits.

I was reminded of this when I read an article by Peter Rowlett on "The unplanned impact of mathematics" in the recent issue of Nature. He introduces the reader to 7 fields of mathematics that, sometimes with centuries of delay, found their use in daily life. It is too bad the article is access restricted, so let me briefly tell you what the 7 examples are. 1) The quaternions, which are today used in algorithms for 3-d rotations in robotics and computer vision. 2) Riemannian geometry, today widely used in physics and plenty of applications that deal with curved surfaces. 3) The mathematics of sphere packing, used for data packing and transmission. 4) Parrondo's paradox, used for example to model disease spreading. 5) Bernoulli's law of large numbers (or probability theory more broadly) and its use for insurance companies to reduce risk. 6) Topology, long thought to have no applications in the real world, and its late blooming in DNA knotting and the detection of holes in mobile phone network coverage. (Note to reader: I don't know how this works. Note to self: Interesting, look this up.) 7) The Fourier transform. There would be little electrodynamics and quantum mechanics without it. Applications are everywhere.
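To make the first of these examples concrete, here is a small sketch of how a quaternion rotates a 3-D vector, the kind of operation robotics and graphics code does all the time. The numbers are just an illustration.

```python
# Sketch of example (1): rotating a 3-D vector with a unit quaternion.
# A rotation by angle theta about a unit axis n is q = (cos(theta/2), sin(theta/2)*n),
# and a vector v transforms as q * (0, v) * q^-1.
import numpy as np

def quat_mult(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, theta):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    q = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(quat_mult(q, np.concatenate([[0.0], v])), q_conj)[1:]

# A 90 degree rotation about the z-axis takes the x-axis to the y-axis:
print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi / 2))   # ~ [0. 1. 0.]
```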

Rowlett has a call on his website, asking for more examples.

The same issue of Nature also has a commentary by Daniel Sarewitz on the NSF Criterion 2 and its update, according to which all proposals should provide a description of how they will advance national goals, for example economic competitiveness and national security. Sarewitz makes it brilliantly clear how absurd such a requirement is for many branches of research:
"To convincingly access how a particular research project might contribute to national goals could be more difficult than the proposed project itself."

And, worse, the requirement might actually hinder progress:
"Motivating researchers to reflect on their role in society and their claim to public support is a worthy goal. But to do so in the brutal competition for grant money will yield not serious analysis, but hype, cynicism and hypocrisy."
I fully agree with him. As I have argued in various earlier posts, the smartest thing to do is to reduce the pressure on researchers (time pressure, financial pressure, peer pressure, public pressure) and let them take what they believe is the way forward. And yes, many of them will not get anywhere. But there is nobody who can do a better job in directing their efforts than they themselves. The question is just what's the best internal evaluation system. It is puzzling to me, and also insulting, that many people seem to believe scientists are not interested in the well-being of the society they are part of, or are somehow odd people whose values have to be corrected by specific requirements. Truth is, they want to be useful as much as everybody else. If research efforts are misdirected, it is not a consequence of researchers' wrongheaded ideals, but of these clashing with strategies of survival in academia.

Sunday, July 24, 2011

Blablameter

"Vigorous writing is concise."
~William Strunk, The Elements of Style (1918)


Fun: Die Zeit writes about Bernd Wurm who studied communication science and got frustrated by the omnipresence of empty words in advertisements and press releases. So he developed a piece of software, the "Blablameter," that checks a text for unnecessary words and awkward grammar that obscures content. The Blablameter ranks text on a scale from 0 to 1: the higher the "Bullshit Index," the more Blabla. You find the Blablameter online at www.blablameter.de; it also works for English input. In the FAQ, Wurm warns that the tool does not check a text for actual content and is not able to judge the validity of arguments; it is merely a rough indicator of writing style. He also explains that scientific text tends to score highly on the Blabla-index.
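Wurm doesn't publish how the Blablameter actually works, but just to illustrate the idea of a style score, here is a naive toy version that counts buzzwords and overly long words. The word list, weights, and thresholds are invented for illustration.

```python
# Naive toy "Blabla index" -- NOT Wurm's actual algorithm, which is not public.
# It counts buzzwords and very long words and maps the result to a 0..1 scale.
# The word list, weights, and length threshold are invented for illustration.
import re

BUZZWORDS = {"innovative", "synergy", "leverage", "solution", "paradigm",
             "holistic", "cutting-edge", "world-class", "basically", "essentially"}

def blabla_index(text):
    words = re.findall(r"[a-zA-Z\-]+", text.lower())
    if not words:
        return 0.0
    buzz_share = sum(w in BUZZWORDS for w in words) / len(words)
    long_share = sum(len(w) > 12 for w in words) / len(words)
    return round(min(1.0, 3 * buzz_share + 2 * long_share), 2)

print(blabla_index("Our innovative, cutting-edge solution leverages holistic "
                   "synergy for world-class paradigm shifts."))          # high
print(blabla_index("Vigorous writing is concise."))                      # low
```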

Needless to say, I couldn't resist piping some abstracts of papers into the website. Here are the results, starting with the no-nonsense writing:


And yes, I did pipe in some text from this blog. My performance seems to have large fluctuations, but is mostly acceptable.

Did you come across anything with a Blabla-Index smaller than 0.08 or larger than 0.66?

Friday, July 22, 2011

Do cell phones cause tinnitus?

Forget about cancer caused by cell phones, what about that ringing in your ear? About 10-15% of the adult population suffer from chronic tinnitus. I've had a case of tinnitus after a back injury. Luckily it vanished after 3 months, but since then I'm very sympathetic to people who go nuts from that endless ringing in their ear. A recent study by a group of researchers from Vienna now looked into the correlation between cell phone use and tinnitus. The results are published in their paper Tinnitus and mobile phone use, Occup Environ Med 2010;67:804-808. It's not open access, but do not despair because I'll tell you what they did.

The researchers recruited a group of 100 sufferers that showed up in some hospital in Vienna. They only picked people for whom no physiological, psychological or medical reason for the onset of their tinnitus was found. They excluded for example patients with diseases of the middle ear, hypertension, and those medicated with certain drugs that are known to influence ear ringing. They also did hearing tests to exclude people with hearing loss, of whom one might suspect that their tinnitus was noise-induced. Chronic tinnitus was defined as lasting longer than 3 months. About one quarter of the patients had had it for longer than 1 year already at the time of recruitment. 38 of the 100 found it distressing "most of the time," and 36 "sometimes." The age of the patients ranged from 16 to 80 years.

The researchers then recruited a control group of also 100 people that were matched to the sufferers in certain demographic factors, among others the age group, years of education and whether they lived in- or outside the city.

At the time the study was conducted (2004), 92% of the recruits used a cell phone. (I suspect the use was strongly correlated with age, but there are no details on that in the paper.) At the time of onset of their tinnitus, only 84% of the sufferers had used a cell phone, and another 17% had used it for less than a year at that time. The recruits, both sufferers and controls, were asked for their cell phone habits by use of a questionnaire. Statistical analysis showed one correlation at the 95% confidence level: for cell phone use longer than 4 years at the onset of tinnitus. In numbers: In the sufferers' group, the ratio between those who had used a cell phone never or for less than one year to those who had used it for more than 4 years was 34/33. In the control group it was 41/23.
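Just to play with the reported numbers (this is not the analysis in the paper, which used a matched case-control design), one can put that 2x2 table into a standard test and look at the crude odds ratio:

```python
# Rough look at the reported 2x2 table. This is NOT the analysis in the paper,
# which used a matched case-control design; it's just the crude odds ratio and
# a Fisher exact test on the pooled numbers.
from scipy.stats import fisher_exact

# rows: sufferers, controls; columns: >4 years of use, never or <1 year of use
table = [[33, 34],
         [23, 41]]

odds_ratio, p_value = fisher_exact(table)
print(f"crude odds ratio: {odds_ratio:.2f}")   # roughly 1.7
print(f"Fisher exact p:   {p_value:.3f}")
```

The crude odds ratio comes out at about 1.7, i.e. long-term users are somewhat over-represented among the sufferers; the borderline significance quoted above comes from the paper's matched analysis, which this quick unmatched check does not reproduce.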

They then discuss various possible explanations, such as the possibility that cell phone radiation affects the synthesis of nitric oxide in the inner ear, but also more banally that a "prolonged constrained posture" or "oral facial manoeuvres" affect the blood flow unfavorably. (Does chewing gum cause tinnitus?)

The result is just barely significant, i.e. just at the edge of the confidence interval. There's a 5% chance of such a result happening just coincidentally by unlucky sampling. So the researchers conclude very carefully that "high intensity and long duration of mobile phone use might be associated with tinnitus." Note that it's "associated with" and not "caused by." Needless to say, if you Google "cell phones tinnitus" you'll find several pages incorrectly proclaiming that "The researchers concluded that long term use of a mobile phone is a likely cause of tinnitus," or that the "study suggests cell phones may cause a chronic ringing in the ears." If such a Google search led you here: the study concludes nothing of that sort. Instead, the authors finish by saying that there "might" be a link and that the issue should be "explored further."

So, do cell phones cause tinnitus? Maybe. Should you stop sleeping with the phone under your pillow? Probably.

In any case, I was left wondering why they didn't ask for phone habits generally. I mean, if it's the posture or movements connected with calling, what's it matter if it's a cell phone or a landline?

Monday, July 18, 2011

Book review: World Wide Mind by Michael Chorost

World Wide Mind
The Coming Integration of Humanity, Machines, and the Internet
By Michael Chorost

Is it surprising that self-aware beings become increasingly aware of their self-awareness and start pushing the boundaries? The Internet, Google, iPhones and wifi on every street corner have significantly changed the way we interact, share information and solve problems. Meanwhile, neuroscientists have made dramatic progress in deciphering brain activity. They have developed devices that allow people to type using thoughts instead of fingers, and monkeys with brain implants have learned how to move a robot arm with their thoughts. These are two examples that Michael Chorost discusses in his book, and that he then extrapolates.

Chorost's extrapolation is a combination of these developments in communication and information technology and neuroscience: Direct brain-to-brain communication by thought transmitted via implants rather than by typed words, combined with wireless access to various soft- and hardware to supplement our cognitive skills.

I agree with Chorost that this "World Wide Mind" is the direction we are drifting in, and that the benefits can be huge. It is interesting though, if you read the comments on my two earlier posts, that many people seemed to be scared rather than excited by the idea, mumbling Borg-Borg-Borg to themselves. It is refreshing and also courageous then that Michael Chorost addresses the topic in his book from a quite romantic viewpoint.

Chorost describes himself as a short, deaf, popular science writer. He wears a Cochlear implant that allows him to hear by electric stimulation of the auditory system (the content of his previous book, which I however didn't read). Chorost started writing "World Wide Mind" single and finished as a married man. He writes about his search for a partner and what he learned along the way about communication and what today's communication on the internet is lacking. The ills produced by our presently incomplete and unsatisfactory online culture, he believes, will be resolved if we overcome the limitations of this exchange. He does not share the pessimism Jaron Lanier put forward in his book "You are not a gadget". (He does however share Lanier's fondness for octopi, and a link to this amazing video with the reader.)

In "World Wide Mind" Chorost wants to offer an outlook of what he believes is doable if today's technology is pushed forward hard enough. He focuses mostly on optogenetics, a recently florishing field of study that has allowed to modify some targeted neurons' genetic code such that their activity can be switched on and off by light signals (most famously, this optogenetically controlled mouse running circles in blue light). He also discusses what scientists have learned about the way our brains store and process input. Chorost then suggests that it seems doable to record each person's pattern of neuronal activity for certain impressions, sights, smells, views, words, emotions and so on (which he calls "cliques") and transmit them to be triggered by somebody else's implant in that person's brain where they would cause a less intense signal of the corresponding clique. That would then allow us, so the idea, to share literally everything.

Chorost offers some examples of what consequences this would have, which however seem quite bizarre to me. Improving on Google's flu tracker, he suggests that the brain implants could "detect the cluster of physical feelings related to flu -- achiness, tiredness, and so on -- and send them directly to the CDC." I'm imagining in the future we can track the spread of yeast infections via shared itchiness, thank you very much. Chorost also speculates that "The greater share of the World Wide Mind's bandwidth might be devoted to sharing dreams" (more likely it would be devoted to downloadable brain-sex), and that "linking the memory [of what happened at some place to the place] could be done very easily, via GPS." I'm not sure I'd ever sleep in a hotel room again.

He barely touches, in one sentence, on what to me is maybe the most appealing aspect, increased empathy bridging the gap between the rich and the poor, both locally and globally. And his vision for science gives me the creeps, for it would almost certainly stifle originality and innovation due to a naive sharing protocol.

"World Wide Mind" is a very optimistic book. It is a little too optimistic in that Chorost spends hardly any time discussing potential problems. He has a few pages in which he acknowledges the question of viruses and shizophrenia, but every new technology has problems, he writes, and we'll be able to address them. The Borg, he explains, are scary to us because they lack empathy and erase the individual. A World Wide Mind, in contrast, would enhance individuality because better connectivity fosters specialization that eventually improves performance. Rather than turning us into Borg, "brain-to-brain technologies would be profoundly humanizing."

It is quite disappointing that Chorost does not at all discuss the cognitive biases we know we have, and what protocols might prevent them from becoming amplified. Nor does he, more trivially, address the point that everybody has something to hide. Imagine you're ignoring a speed limit sign (not that I would ever do such a thing). How do you avoid this spreading through your network, ending up in a fine? Can you at all? And let's not mention that reportedly a significant fraction of the adult population cheats on their partner. Should we better wait for the end of monogamy before we move on with the brain implants? (It may be closer than you think.) And, come to think of it, let's better wait for the end of the Catholic Church as well. Trivial as it sounds, these issues will be real obstacles in convincing people to adopt such a technology, so why didn't Chorost spend a measly paragraph on that?

Chorost's book is an easy read. On the downside, it lacks detail and explanation. His explanation of MRI, for example, is one paragraph saying it's a big expensive thing with a strong magnet that "can change the orientation of specific molecules in a person's body, letting viewers see various internal structures clearly." And that's it. He also talks about neurotransmitters without ever explaining what those are, and you're unlikely to learn anything about neurons that you didn't already know. Yes, I can go and look up the details. But that's not what I buy a book for.

"World Wide Mind" sends unfortunately very unclear messages that render Chorost's arguments unconvincing. He starts out stressing that the brain's hardware is its software, and so it's quite sloppy he then later, when discussing whether the Internet is or might become self-aware, confuses the Internet with the World Wide Web. According to different analogies that he draws upon, blogs either "could be seen as a collective amygdala, in that they respond emotionally to events" and Google (he means the search protocol, not the company) "can be seen as forming a nascent forebrain" or some pages later it can be seen as an organ of an organism, or a caste of a superorganism.

Chorost also spends a lot of words on some crazy California workshop he attended, where he learned about the power of human touch (in other words, the workshop consisted of a bunch of people stroking each other), but then never actually integrates his newly found insights about the importance of skin contact into the World Wide Mind. This left me puzzled, because the brain-to-brain messaging he envisions can only transfer one's own neuronal activity, which means that rather than tapping on your friend's shoulder, you'd essentially have to tap your own shoulder and send it to your friend. And Chorost does not make a very convincing case when he claims we'd easily be able to distinguish somebody else's memory from our own because it would lack detail. He does that after having discussed at length our brains' tendency to "confabulation," the creation of a narrative for events that didn't happen or didn't make sense, to protect our sense of causality and meaning, something he seems to have forgotten some chapters after explaining it.

In summary: the book is very readable, entertaining, and smoothly written. If you don't know much about the recent developments in neuroscience and optogenetics, it will be very interesting. The explanations are however quite shallow and Chorost's vision is not well worked out. On the pro side, this gives you something to think about yourself, and at only 200 pages the book doesn't require a big time investment.

Undecided? You can read the prologue and first chapter of the book here, and Chapter 4 here. Michael Chorost tweets and is on Facebook.

Friday, July 15, 2011

Collective excitement

I woke up this morning to find my twitter account hacked and distributing spam. I'm currently reading Michael Chorost's new book “World Wide Mind,” and if his vision comes true the day might be near when your praise of the frozen pizza leaves me wondering if your brain has been hacked. A book review will follow when I'm done reading. If the babies let me, that is. Here, I just want to share an interesting extract.

At the risk of oversimplifying 150 pages: a “clique” is something like an element of the basis of your thoughts. It might be a thing, a motion, an emotion, a color, a number, and so on, like e.g. black, dog, running, scary... It's presumably encoded in some particular pattern of neurons firing in your brain, patterns that however differ from person to person. The idea is that instead of attempting brain-to-brain communication by directly linking neurons, you identify the patterns for these “cliques.” Once you've done that, software can identify them from your neuronal activity and transmit them to somebody else, where they get translated into that person's respective neuronal activity.
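
To make the translation step a little more concrete, here is a toy sketch in Python of how such a clique-based relay might work in principle. Everything in it is my own invention for illustration: the names, the dictionaries, and the idea of representing a firing pattern as a set of neuron indices. It is not Chorost's proposal, and real neuronal activity is of course nothing this tidy.

    # Toy sketch of a clique-based relay: only a shared clique label is transmitted,
    # never raw neuronal activity, which is encoded differently in each brain.
    # All names and data here are invented for illustration.

    # Hypothetical calibration: clique label -> firing pattern (set of neuron indices)
    alice_patterns = {"dog": {3, 17, 42}, "running": {5, 8, 23}, "scary": {2, 11, 30}}
    bob_patterns = {"dog": {7, 9, 51}, "running": {1, 14, 28}, "scary": {6, 19, 33}}

    def identify_clique(activity, patterns):
        """Return the clique label whose calibrated pattern overlaps the activity most."""
        return max(patterns, key=lambda label: len(patterns[label] & activity))

    def relay(sender_activity, sender_patterns, receiver_patterns):
        """Decode a clique on the sender's side and re-encode it for the receiver."""
        label = identify_clique(sender_activity, sender_patterns)
        return label, receiver_patterns[label]

    label, bob_pattern = relay({3, 17, 42, 99}, alice_patterns, bob_patterns)
    print(label, bob_pattern)  # prints "dog" together with Bob's own pattern for it

The point of the sketch is merely that the sender's raw pattern never reaches the receiver; only the shared label does, and it then gets rendered in the receiver's own encoding.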

In Chapter 10 on “The Future of Individuality,” Chorost speculates on the enhanced cognitive abilities of an interconnected World Wide Mind:
“[I]magine a far-flung group of physicists thinking about how to unify quantum mechanics and general relativity (the most important unsolved problem in physics). One of them has the germ of an "aha" idea, but it's just a teasing sensation rather than a verbally articulated thought. It evokes a sense of excitement that her [brain implant] can pick up. Many cliques in her brain would be activated, many of them subconsciously. The sensation of excitement alerts other physicists that something is up: they suddenly feel that sense of aha-ness themselves. The same cliques in their brains are activated, say these: unification problem, cosmological constant, black holes, Hawking radiation.

An apparent random assortment, but brains are good at finding patterns in randomness. New ideas often come from a fresh conjunction of old ones. In a group intimately familiar with a problem, the members don't need to do a whole lot of talking to understand each other. A few words are all that are needed to trigger an assortment of meaningful associations. Another physicist pushes those associations a little further in his own head, evoking more cliques in the group. Another goes to his keyboard and types out a few sentences that capture it, which go out to the group; perhaps they are shared on a communally visible scratch pad. The original physicist adds a few more sentences. Fairly rapidly, the new idea is sketched out in a symbology of words and equations. If it holds up, the collective excitement draws in more physicists. If it doesn't, the group falls apart and everyone goes back to what they were doing. This is brainstorming, but it's facilitated by the direct exchange of emotions and associations within the group, and it can happen at any time or place.”

Well, I'm prone to like Chorost's book, as you can guess if you've read last year's post It comes soon enough, in which I wrote: “The obvious step to take seems to me not trying to get a computer to decipher somebody's brain activity, but to take the output and connect it as input to somebody else. If that technique becomes doable and is successful, it will dramatically change our lives.”

Little did I know how far technology has already come, as I've now learned from Chorost's book. In any case, the above example sounds like it's right out of my nightmares. I'm imagining that whenever one of my quantum gravity friends has an aha-moment, we all get a remote-triggered adrenaline peak and jump all over it. We'd never sleep, brains would start fuming, we'd all go crazy in no time. Even if you managed to damp this out, the over-sharing of premature ideas is not good for progress (as I've argued many times before). Preemies need intensive care; they need it warm and quiet. A crowd's attention is the last thing they need. Sometimes it's not experience and knowledge of all the problems that helps one move forward, but the lack thereof. Arthur C. Clarke put it very well in his First Law:
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

The distinguished scientist may be wrong, but he will certainly be able to state his opinion very clearly and indeed have a lot of good reasons for it. He may still be wrong in the end, but by then you might have given up thinking through the details. Skepticism and debunking are central elements of research. Unfortunately, one sometimes throws out the baby with the bathwater of bad ideas. “Collective excitement” based on a sharing of emotions doesn't seem like the best approach to science.

Sunday, July 10, 2011

Love to wonder

The July issue of 'Physik Journal' (the membership journal of the German Physical Society) has an interview with Jack Steinberger. Steinberger is an experimental particle physicist who won the 1988 Nobel Prize, together with Leon Lederman and Melvin Schwartz, for his 1962 discovery of the muon neutrino. He was born in Germany, but his family emigrated to the USA in 1934. Steinberger just celebrated his 90th birthday. What does a physicist do at the age of 90? Here's an excerpt from the interview (by Stefan Jorda):

You still come to your office at CERN every day?

I came by bike until last year, but then I fell and now I take the bus. I get up at five and arrive at half past six.

Every morning?

Not on Saturdays and Sundays. But I have nothing else to do. I read my email, then I go to the arXiv and look at the new papers in astrophysics. On average, it's about 50 to 100, and many of them are very bad. I read the abstracts, which takes one and a half hours, then I print the 5 to 10 that may be of interest to me. I try to understand them during the rest of the day. Then at 4pm I take the bus back home.

Since when have you been interested in astrophysics?

In 1992 COBE detected the inhomogeneities in the cosmic microwave background; that was wonderful. It was a big challenge for me, as a particle physicist, to understand it, because one has to know general relativity and hydrodynamics. Back then I was still a little smarter and really tried to learn these things. Today I am interested, for example, in active galactic nuclei. The processes there are very complicated. I try to keep track, but there are many things I don't understand, and a lot simply is not understood.

(Any awkward English grammar is entirely the fault of my translation.)

Should I be lucky enough to live to the age of 90, that's how I would like to spend my days: following our ongoing exploration and increasing understanding of nature. Okay, maybe I would get up a little later. And on Saturdays I'll bake a cake or two because my great-grandchildren come for a visit. All nine of them.

"Men love to wonder, and that is the seed of science."
~Ralph Waldo Emerson

Tuesday, July 05, 2011

Getting cuter by the day...

If you've been wondering at what age babies are the cutest, there's a scientific answer to that. Yes, there is. In the year 1979, Katherine A. Hildebrandt and Hiram E. Fitzgerald from the Department of Psychology at Michigan State University published the results of their study on "Adults' Perceptions of Infant Sex and Cuteness."

A totally representative group of about 200 American college students of child psychology were shown 60 color photographs of infant faces: 5 male and 5 female for each of six age levels (3, 5, 7, 9, 11, and 13 months). The babies were photographed by a professional photographer under controlled conditions when their facial expressions were judged to be relatively neutral, and the infants' shoulders were covered with a gray cape to hide their clothing.

The study participants were instructed to rate the photos on a 5-point scale of cuteness (1: not very cute, 2: less cute than average, 3: average cuteness, 4: more cute than average, 5: very cute). The average rating was 2.75, ie somewhat less than averagely cute. The authors write that it was probably the selection of photos with neutral facial expressions and the gray cape that accounted for the students' overall perception of the infants as slightly less cute than average. And here's the plot of the results:
So, female cuteness peaks at 9 months.

For the above rating the participants were not told the gender of the child but asked to guess it, which provided a 'perceived gender' assignment for each photo. In a second experiment, the participants were told a gender, which however was randomly picked. It turned out that an infant perceived to be male but labeled female was rated less cute than if it was labeled male. Thus the authors conclude that cuter infants are more likely to be perceived as female, and that cuteness expectations are higher for females.

Partly related, Gloria just woke up:

Friday, July 01, 2011

Why do we live in 3+1 dimensions? Another attempt.

It's been a while since we discussed the question why we experience no more and no less than 3 spatial dimensions. The last occasion was a paper by Karch and Randall who tried to shed some light on the issue, if not very convincingly. Now there's a new attempt on the arXiv:
    Spacetime Dimensionality from de Sitter Entropy
    By Arshad Momen and Rakibur Rahman
    arXiv: 1106.4548 [hep-th]

    We argue that the spontaneous creation of de Sitter universes favors three spatial dimensions. The conclusion relies on the causal-patch description of de Sitter space, where fiducial observers experience local thermal equilibrium up to a stretched horizon, on the holographic principle, and on some assumptions about the nature of gravity and the constituents of Hawking/Unruh radiation.

What they've done is calculate the entropy and energy of the Unruh radiation in the causal patch of any one observer in a de Sitter spacetime with d spatial dimensions. Holding the energy fixed and making certain assumptions about the degrees of freedom of the particles in the radiation, the entropy has a local maximum at d = 2.97 spacelike dimensions, a minimum around 7, and goes to infinity for large d. Since the authors restrict themselves to d less than or equal to 10, this seems to say that for a given amount of energy the entropy is maximal for 3 spacelike dimensions. Assuming that the universe is created by quantum tunneling, the probability for creation is larger the larger the entropy, so it would then be likely that we live in a space with 3 dimensions.
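
Schematically, and in my own notation rather than the paper's, the tunneling argument boils down to

    P_{\rm creation}(d) \propto e^{S(d)},

where S(d) is the entropy of the Unruh radiation in a causal patch at fixed energy. Among the dimensions d less than or equal to 10 that the authors allow, this probability then peaks near the local maximum at d = 2.97, i.e. for 3 spatial dimensions.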

To calculate the entropy one needs a cutoff, the value of which is fixed by matching it to the entropy associated with the de Sitter horizon; that's where the holographic principle becomes important.

Not only is it crucial that they impose an upper bound on the number of dimensions by some other argument, their counting also depends on the number of particles and the dimensions they can propagate into. They assume that only massless particles contribute, and that these are photons and gravitons. Massive particles, even with small masses, the authors write, are "unacceptable" because then the cutoff could be sensitive to the Hubble parameter. By considering only photons and gravitons as massless particles they are assuming the standard model. So even in the best case one could say they have a correlation between the number of dimensions and the particle content. Also, in braneworld models the total number of spatial dimensions isn't necessarily the one that determines the degrees of freedom at low energy, a possibility the authors explicitly say they're not considering.

Thus, as much as I'd like to see a good answer to the question, I'm not very convinced by this one either.

Wednesday, June 29, 2011

This and That

Some random things that caught my attention recently: