Big Thinkers Thinking Big Thoughts About Computing, Part II

Update 10/30/06: Steve Lohr at the New York Times wrote a story about the symposium that nicely weaves together the different presentations.

As mentioned in the previous post, The National Academies’ Computer Science and Telecommunications Board (CSTB) held a symposium to commemorate the Board’s 20th anniversary. Cameron blogged about the first half of the symposium, and I am writing about the afternoon sessions.

In general, these sessions were perhaps more technical and more speculative than the morning sessions, which is understandable given that the theme of the event was what the state of computing might be in 2016. As this is a policy weblog, I’m not going to go into great technical detail, especially since the presentations will be posted on the CSTB website soon; they will give a better technical explanation than I can in this forum.

A theme emerged in several speakers’ presentations: as the amount of information collected and accessible increases, how that information is processed becomes more and more important. In turn, the ability to do more processing allows the retrieval of information previously thought lost or otherwise unavailable. The storage and use of that information will have policy implications, as suggested in the USACM privacy recommendations released earlier this year.

Shree Nayar, TC Chang Chaired Professor of Computer Science, Columbia University, Computational Cameras: Redefining the Image

Professor Nayar spoke at length about the state of high-end cameras today and what can be done with them. By combining new optics with powerful computers, many kinds of new images are possible, and future advances will focus on improving both optics and processing. Examples include cameras that provide a full 360-degree field of view (processing helps straighten out the images and provide multiple perspectives). Picture resolution is increasing rapidly, to the point that reflections off the cornea of the eye can be captured in detail. With the right processing, you can reconstruct the perspective of the person in the photo – you can see what they saw. This can also be done with video and with older photos (as old as the early days of photography). One possible area of application is human-computer interfaces. Other areas where increased processing capability helps are spherical cameras (3-D photography), concave lens cameras (depth photos), and dynamic range (handling low-light or high-contrast images). The information is there; it’s a question of retrieving it. Future cameras will continue to leverage the combination of flexible optics with powerful computation (and will likely decrease in size).
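
To give a concrete flavor of what “the information is there, it’s a question of retrieving it” can mean in practice, here is a minimal exposure-fusion sketch in Python. This is my own illustration, not anything Professor Nayar presented; the weighting scheme and the use of NumPy are assumptions. The idea: a camera captures several frames at different exposures, and software weights each pixel by how well exposed it is, recovering more dynamic range than any single frame contains.

```python
# Minimal exposure-fusion sketch (illustrative only): merge bracketed
# exposures of the same scene into a relative-radiance map.
import numpy as np

def merge_exposures(images, exposure_times):
    """images: list of float arrays scaled to [0, 1]; exposure_times: seconds."""
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Trust well-exposed (mid-tone) pixels; nearly black or blown-out
        # pixels carry little information and get low weight.
        weight = 1.0 - 2.0 * np.abs(img - 0.5)
        numerator += weight * (img / t)
        denominator += weight
    return numerator / np.maximum(denominator, 1e-8)
```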

Prabhakar Raghavan, Head of Yahoo! Research, Online Interactive Media in 2016

Dr. Raghavan spoke mainly about information integration – how do we optimize computers so that we can spend less time searching for stuff and more time doing stuff. The two important parts of information integration are information extraction and schema normalization (standard groupings). Since data storage and communication decrease in price faster than computation or processing, advances in using information tend to lag behind advances in gathering or communicating information. There are many competing interests in the development and use of online interactive media. Where storage and processing are concerned, there’s a tension between the needs of basic researchers (access to data), consumers (privacy of user data), and content and infrastructure providers (retaining search logs and other data to maintain services or provide products).
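
As a purely illustrative sketch of what schema normalization involves (the field names and synonym table below are my own assumptions, not anything from Yahoo! Research), consider mapping records from differently labeled sources onto one canonical schema so they can be searched together:

```python
# Toy schema normalization: map differently labeled records onto one
# canonical schema. Field names and synonyms are hypothetical.
CANONICAL_FIELDS = {
    "name":  ["name", "full_name", "contact"],
    "phone": ["phone", "phone_number", "tel"],
    "city":  ["city", "town", "locality"],
}

def normalize(record):
    """Return a record keyed by canonical field names."""
    normalized = {}
    for canonical, synonyms in CANONICAL_FIELDS.items():
        for key in synonyms:
            if key in record:
                normalized[canonical] = record[key]
                break
    return normalized

# Two records from different sources, same underlying information:
print(normalize({"full_name": "A. Lovelace", "tel": "555-0100"}))
print(normalize({"contact": "A. Lovelace", "phone_number": "555-0100", "town": "London"}))
```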

Dr. Raghavan also noted that we have few research groups like PARC today, even though PARC was instrumental in the development of the personal computer and in building the human-computer interface research community. Echoing one of Dr. Wladawsky-Berger’s recommendations, Dr. Raghavan emphasized a need for ethnographers, economists, and other social scientists to help create and design media experiences.

Richard Karp, University Professor, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, The Algorithmic Nature of Scientific Theories

Professor Karp’s presentation served as a reminder that there is an underlying body of theory that helps push forward research and applications in the field. Algorithms will still matter, theory will still influence practice, and both will continue to raise fundamental questions about proof and secure transactions. He went so far as to state that the long-run impact of algorithms (the origin of many successful companies) will outstrip the impact of Moore’s Law – perhaps the boldest statement of the symposium.
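
A back-of-the-envelope comparison (my own, not Professor Karp’s) shows why that claim is plausible: an asymptotically better algorithm on unimproved hardware can still beat a worse algorithm even after years of hardware speedups.

```python
# Illustrative arithmetic: compare an O(n^2) method on hardware sped up
# 1000x against an O(n log n) method on unimproved hardware.
import math

n = 10**8                      # problem size, e.g. items to sort
ops_quadratic = n**2           # ~1e16 basic operations
ops_nlogn = n * math.log2(n)   # ~2.7e9 basic operations

speedup_from_hardware = 1000   # generous stand-in for years of Moore's Law
print(ops_quadratic / speedup_from_hardware)  # ~1e13 "effective" operations
print(ops_nlogn)                              # ~2.7e9 operations
# The better algorithm on old hardware still wins by several orders of magnitude.
```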

Karp also noted the influence of computational science and computational theory on many other scientific disciplines, such as biology and statistical physics. Mathematics and computer science continue to have strong interconnections, especially with respect to proofs. Computing can be useful in disciplines that examine macroscopic properties of large systems arising from local interactions. New interactions with the social sciences include game theory, economics, and studies of (or involving) large databases.

Rick Rashid, Senior Vice President, Research, Microsoft Corporation, Future Tense

Dr. Rashid’s presentation was perhaps the most speculative of the day, or at least the most future-oriented. He started by reviewing the successes and failures of past predictions; sometimes things simply arrive too early (the wallet PC or interactive television).

Some of the things he suggested for the future:

  • LCD displays and whiteboards will cross in price point, allowing for advanced display possibilities.
  • Provable systems – a set of properties we can prove to be true/false.
  • Mega-servers or data farms will move from computers in a building to buildings that are computers.
  • Human-scale storage – a human black box that records everything you see/hear/do – for a terabyte or two (see the rough arithmetic after this list). The same could be done for appliances or pets, increasing the capability to observe things and helping with amnesia/memory loss, especially when coupled with sensors.
  • Increasingly blurred lines between real and virtual images – large images stitched together from many photos, combined with compositing software. You can create incredibly detailed images (gigapixels) and process them to the point where things can disappear seamlessly from photos. Virtual viewpoints and tours can be created from this material.
  • Streaming intelligence – constant processing and analysis of data that will inform modelling and expectations.
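
The “terabyte or two” figure for human-scale storage is easy to sanity-check. Here is some rough arithmetic; the bitrates and waking-hours assumption are mine, not Dr. Rashid’s figures:

```python
# Rough arithmetic for the "human black box": continuously recorded
# compressed audio plus very-low-bitrate video during waking hours.
SECONDS_PER_YEAR = 365 * 24 * 3600
waking_fraction = 2 / 3          # about 16 waking hours per day (assumed)
audio_bytes_per_s = 16_000       # ~16 kB/s compressed audio (assumed)
video_bytes_per_s = 60_000       # ~60 kB/s heavily compressed video (assumed)

bytes_per_year = SECONDS_PER_YEAR * waking_fraction * (audio_bytes_per_s + video_bytes_per_s)
print(bytes_per_year / 1e12, "TB per year")  # roughly 1.6 TB under these assumptions
```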

Dana Cuff, (Professor, Urban Planning), Mark Hansen (Associate Professor, Statistics), and Jerry Kang (Professor, Law), University of California, Los Angeles, Out of the Woods: Urban Sensing in the Coming Decade

This was a rapid tag-team presentation on the state of research in sensor networks and the policy, legal, and scientific implications of the increasing use of such networks in urban environments. The rapid deployment of RFID tags is a good example of such a network. A large part of the concern comes from the increasing participation of non-scientists in maintaining the sensors or in collecting and using the resulting data, which raises concerns about unreliable or non-objective research; it’s harder to ensure uniform experimental practice in such an environment. Other concerns include intermediary liability for the use of stored data, privacy (including issues raised when a person surveils themselves), the interaction of intellectual property law with the non-market forces that dominate these networks, and making sure there are enough participants to make these networks practical.

Ultimately, an important step toward making these sensor networks and their resulting data useful (in some kind of data commons) is a service to sort the data – a datablog. Now, while no major sensor network has yet gone active, is the best time to develop effective policy to help manage such networks.
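
As a purely illustrative sketch of what a “datablog” might do (the record format and field names below are my own assumptions), think of a small service that filters and time-orders raw sensor readings so others can build on them:

```python
# Toy "datablog": sort raw urban-sensor readings into a queryable commons.
from datetime import datetime

readings = [
    {"sensor": "air-quality-12", "time": datetime(2016, 5, 1, 8, 30), "value": 41},
    {"sensor": "noise-03",       "time": datetime(2016, 5, 1, 8, 15), "value": 67},
    {"sensor": "air-quality-12", "time": datetime(2016, 5, 1, 9, 0),  "value": 44},
]

def datablog(readings, sensor=None):
    """Return readings (optionally for one sensor) in time order."""
    selected = [r for r in readings if sensor is None or r["sensor"] == sensor]
    return sorted(selected, key=lambda r: r["time"])

for entry in datablog(readings, sensor="air-quality-12"):
    print(entry["time"], entry["value"])
```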

Eric Schmidt, CEO, Google

Mr. Schmidt provided the keynote for the symposium. As he sees it, we are in the midst of a transition of architectures, moving from client/server architectures toward Web 2.0 – perhaps something pretty close to the networked computing he argued for in the mid-1990s. However, the convergence arguments that helped power networked-computing discussions were wrong: we will have several devices offering a similar view, not a single device that does everything.

This change is happening now because new tech companies are shifting from building computing infrastructure to enabling new kinds of businesses and services. But he warned that the community has never really dealt with the compounding of changes in computing or the consequences of those changes. Search is key to this transition (echoing one of Dr. Raghavan’s points), and the return of supercomputers helps power it. Of course there are limitations, but they are being addressed – as are difficulties with other architectures.

Improving the use and utility of this new architecture will require a few new things. First is a more complete understanding of how people consume information; the habits of teenagers suggest significant changes in the use and reuse of information. Paying for these services is also critical to the success of this architecture. One thing the community needs to do is talk more with governments about their influence in these areas, including privacy boundaries, concerns over individual rights, and regulation of the Internet (Google is concerned about a loss of net neutrality, but the company sees the resolution of this issue as a business negotiation or a change in business model). Keep in mind that each country will decide these questions in a different fashion. Whether the questions are about policy or technology, approaching them from the perspective of the end user is a fruitful approach.

Schmidt also spoke about IT and education, specifically K-12 education. He found it odd that there is no K-12 equivalent of MIT’s OpenCourseWare, where best practices and curricula would be available online. With procurement models for IT mirroring the multi-year cycles for textbooks, there are plenty of challenges for businesses entering this market; it may be a better opportunity for NGOs to take action.

This was a long day, but ultimately a useful event for helping shape thinking about the future. For additional information, please consult the symposium website; presentation material should be available there soon.
