20231111
20230711
Discovery Channel Special
When our laboratory came up short on research grants, I took my request directly to the President himself when fate brought us together on his first trip overseas after the election. The Commander in Chief impressed me with both his immediate familiarity with our work and his enthusiasm in response to my earnest request for $1M from a budget allocated to national-security-priority scientific research through a grant program created by Clinton in one of his last acts in office, the 2001 National Nanotechnology Initiative.

My living arrangements at the lab consisted of an expansive three-bedroom master suite with a fully stocked library, typically reserved for visiting prime ministers, senators, and senior diplomats. The quarters were shared with none other than the project's principal scientific investigator: Hugo de Garis. On a midsummer afternoon, as the two of us took a random walk through the sprawling estate and lush wooded grounds surrounding the manor, immersed in a passionate debate over the long-term promise and perils of superintelligence, the ever-eccentric



From manned spaceflight training at NASA and the summit of a volcano to field expeditions deploying state-of-the-art sensors in rough desert terrain, I collaborated to lead multidisciplinary teams of scientists, researchers, special forces domain experts, and engineers in field-testing next-generation technologies in austere environments. Each of these initiatives was undertaken with a singular aim: to make a profound and positive impact on the future of humanity, for our children, our children’s children, and the generations yet to come.
“Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light — not our darkness — that most frightens us. We oft ask ourselves: ‘Who am I to be brilliant, gorgeous, talented, fabulous?’ Actually, who are we not to be? You are a child of God. Your playing small here doesn’t serve the world. There’s nothing enlightened about shrinking so other people won’t feel insecure around you. We were born to make manifest the glory of God that lies within us. It’s not just in some of us. It’s in everyone—and as we let our light shine, we unconsciously give other people permission to do the same. As we are liberated from our own fear, our presence automatically liberates others.”
Astronaut Scientists for Hire
Open New Research Frontier in Space
At a joint press conference Monday with Virgin Galactic, XCOR, SwRI, and others at the Next-Generation Suborbital Researchers Conference, Astronauts for Hire Inc. announced the selection of its third class of commercial scientist-astronaut candidates to conduct experiments on suborbital flights.
Among those selected was Singularity University inaugural program faculty advisor, teaching fellow, and track chair Christopher Altman, a graduate fellow at the Kavli Institute of Nanoscience, Delft University of Technology.
“The selection process was painstaking,” said Astronauts for Hire Vice President and Membership Chair Jason Reimuller. “We had to choose a handful of applicants who showed just the right balance of professional establishment, broad technical and operational experience, and a background that indicates adaptability to the spaceflight environment.”
“With the addition of these new members to the organization, Astronauts for Hire has solidified its standing as the premier provider of scientist-astronaut candidates,” said its President Brian Shiro. “Our diverse pool of astronauts in training represents more than two dozen disciplines of science and technology, speaks sixteen languages, and hails from eleven countries. We can now handle a much greater range of missions across different geographic regions.”
Altman completed Zero-G and High-Altitude Physiological Training under the Reduced Gravity Research Program at NASA Ames Research Center in Silicon Valley and NASA Johnson Space Center in Houston, and was tasked to represent NASA Ames at the joint US-Japan space conference (JUSTSAP) and the launch conference (PISCES) for an astronaut training facility on the slopes of Mauna Kea Volcano on the Big Island of Hawaii.
Altman’s research has been highlighted in international press and publications including Discover Magazine and the International Journal of Theoretical Physics. He was recently awarded a fellowship to explore the foundations and future of quantum mechanics with Anton Zeilinger at the International Akademie Traunkirchen in Austria.
“The nascent field of commercial spaceflight and the unique conditions afforded by space and microgravity environments offer exciting new opportunities to conduct novel experiments in quantum entanglement, fundamental tests of spacetime, and large-scale quantum coherence,” said Altman.
20230710
Two hundred years ago, if you suggested people would comfortably travel in flying machines—reaching any destination in the world in a few hours' time—instantly access the world's cumulative knowledge by speaking to something the size of a deck of cards, or travel to the Moon or Mars, you'd be labeled a madman. The future is bound only by our imagination.
Someday very soon we may look back on the world today in much the same way we now look back on those who lived in the time of Galileo, so certain and self-assured that the Earth stood fixed at the center of the universe. The time is now. A profound shift in consciousness is long overdue. The universe is teeming with life. We're all part of the same human family.
This is potentially the single most momentous moment in our known history—not just for us as a nation, or as humanity, but as a planet. The technological leaps that could come from developing open contact with nonhuman intelligence are almost beyond our comprehension. That is why this is such a monumental moment for us as a collective whole. It could change every single one of the eight billion human lives on this planet.
We stand on the shores of a vast cosmic ocean, with untold continents of possibility to explore. As we continue our collective journey, scaling the cosmic ladder of evolution and expanding our reach outwards in the transition to a multiplanetary species, Earth will soon be a destination, not just a point of origin.
20230708
Overview

– Don Williams
We dream. It's what makes us who we are. Down to our bones, to the core of our cellular memories, passed down through eons of survival, expansion, exploration and growth. The instinct to build, the drive to seek beyond what we know. It's in our DNA.
We cross the oceans, we conquer the skies, unyielding, relentless in our pursuit of the farthest frontiers, venturing forth to launch ourselves outwards and find a new home for our descendants among the stars.
Yesterday's impossible becomes today's greatest achievement—and tomorrow's routine. The heavens beckon, parting open. A new generation of innovators and explorers heeds the call, the invitation to take our species further: not just to visit, but to stay.
Keynote on the Future of Space Exploration, broadcast live to 108 cities around the world
Carpe futurum.
– Christopher Altman
20230705
20230704
How Artificial Intelligence Could Save the Day: The threat of extinction and how AI can help protect biodiversity in Nature
The Conversation: If we’re going to label AI an ‘extinction risk’, we need to clarify how it could happen. As a professor of AI, I am also in favor of reducing any risk, and prepared to work on it personally. But any statement worded in such a way is bound to create alarm, so its authors should probably be more specific and clarify their concerns.
CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Dozens of AI industry leaders, academics and even some celebrities called for reducing the risk of global annihilation due to artificial intelligence, arguing that the threat of an AI extinction event should be a top global priority.
NYT: AI Poses ‘Risk of Extinction,’ Industry Leaders Warn. Leaders from OpenAI, Google DeepMind, Anthropic and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.
BBC: Experts warn of artificial intelligence risk of extinction. Artificial intelligence could lead to the extinction of humanity, experts — including the heads of OpenAI and Google Deepmind — have warned.
PBS: Artificial intelligence raises risk of extinction, experts warn. Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.
NPR: Leading experts warn of a risk of extinction from AI. Experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than us, and it is time to impose limits to ensure they don't take control over humans or destroy the world.
CBC: Artificial intelligence poses 'risk of extinction,' tech execs and experts warn. More than 350 industry leaders sign a letter equating potential AI risks with pandemics and nuclear war.
CBS: AI could pose "risk of extinction" akin to nuclear war and pandemics, experts say. Artificial intelligence could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority," according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the "godfather" of AI.
USA Today: AI poses risk of extinction, 350 tech leaders warn in open letter. CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.
CNBC: AI poses human extinction risk on par with nuclear war, Sam Altman and other tech leaders warn. Sam Altman, CEO of ChatGPT-maker OpenAI, as well as executives from Google’s AI arm DeepMind and Microsoft were among those who supported and signed the short statement.
Wired: Runaway AI Is an Extinction Risk, Experts Warn. A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic.
Forbes: Geoff Hinton, AI’s Most Famous Researcher, Warns Of ‘Existential Threat’ From AI. “The alarm bell I’m ringing has to do with the existential threat of them taking control,” Hinton said Wednesday, referring to powerful AI systems. “I used to think it was a long way off, but I now think it's serious and fairly close.”
The Guardian: Risk of extinction by AI should be global priority, say experts. Hundreds of tech leaders call for world to treat AI as danger on par with pandemics and nuclear war.
The Associated Press: Artificial intelligence raises risk of extinction, experts say in new warning. Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Al Jazeera: Does artificial intelligence pose the risk of human extinction? Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.
The Atlantic: We're Underestimating the Risk of Human Extinction. An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.
Sky News: AI is similar extinction risk as nuclear war and pandemics, say industry experts. The warning comes after the likes of Elon Musk and Prime Minister Rishi Sunak also sounded significant notes of caution about AI in recent months.
80,000 Hours: The Case for Reducing Existential Risk. Concerns of human extinction have started a new movement working to safeguard civilisation, which has been joined by Stephen Hawking, Max Tegmark, and new institutes founded by researchers at Cambridge, MIT, Oxford, and elsewhere.
The Washington Post: AI poses ‘risk of extinction’ on par with nukes, tech leaders say. Dozens of tech executives and researchers signed a new statement on AI risks, but their companies are still pushing the technology.
TechCrunch: OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk. In a Twitter thread accompanying the launch of the statement, CAIS director Dan Hendrycks expands on the aforementioned statement, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI — not simply risk of extinction.”