TECHNOLOGY AND DEVELOPMENT
These Captivating Drone Images Show Unexpected and Bizarre Scenes
Drone sales have soared to new heights in recent years. Only a few short years ago the prospect would have seemed unlikely for the average civilian, back when the only talk of these unmanned aerial vehicles was for military purposes. But these days, drones are increasingly being used for recreational purposes, and most now come equipped with high-quality cameras, making it easier than ever to snap hard-to-reach pictures from captivating, seemingly impossible angles. Drones are also now being used by retailers, hospitals, law enforcement and even sporting events to capture angles not easily accessed from ground level. But sometimes drones capture bizarre and unexpected scenes from our everyday lives that a normal camera simply cannot reach. Read on for an exclusive view of some of the world’s most astonishing images captured by drones.
Watch slide show:
https://www.icepop.com/astonishing-phot ... ng-drones/
When Technology Takes Revenge
While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.
***
By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.
Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating additional worse problems, new types of problems, or shifting the harm elsewhere. In short, they bite back.
Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.
Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.
Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening in complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not a catalogue of specific risks.
***
Types of revenge effects
There are four different types of revenge effects, described here as follows:
1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.
***
Recognizing unintended consequences
The more we try to control our tools, the more they can retaliate.
Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.
Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.
Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”
Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”
Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.
Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.
Cumulative problems, compared to localized ones, are harder to measure and harder to muster concern about. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”
Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems”, Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them is far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disrupting the material is often more harmful than leaving it in place.
***
Not all effects exact revenge
A revenge effect is not a side effect, which is simply a cost that goes along with a benefit. Sanitizing a public water supply, for instance, has significant positive health outcomes. It also has a side effect of necessitating an organizational structure that can manage and monitor that supply.
Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.
Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:
If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.
***
In support of caution
In the conclusion of Why Things Bite Back, Tenner writes:
We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.
While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”
Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.
While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).
Before we intervene in a system, assuming it can only improve things, we should be aware that our actions can do the opposite or do nothing at all. Our estimations of benefits are likely to be more realistic if we are skeptical at first.
If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.
https://fs.blog/2020/09/revenge-effects/
Should You Choose Your Baby’s Eye Color?
Nobel Prize Winner Dr. Jennifer Doudna talks to Kara Swisher about the power — and pitfalls — of CRISPR gene-editing technology
CRISPR-Cas9 is the kind of scientific breakthrough that could change human evolution. Scientists call it “genetic scissors” — a tool that snips DNA with powerful and scary precision. As Dr. Jennifer Doudna, the co-developer of the gene-editing technology, explains, scientists can now edit the genomes of living organisms “like you might edit a Word document.”
Dr. Doudna and her collaborator, Dr. Emmanuelle Charpentier, won the Nobel Prize in Chemistry this year. Their pioneering research could revolutionize cancer treatment. Some fear it could also be used to create designer babies.
So what does this technology mean for how we live — and die? How will potential profit complicate the incentives of scientists? And just because we can more precisely “edit” life, should we?
Listen to the podcast and read the transcript at:
https://www.nytimes.com/2020/10/22/opin ... 778d3e6de3
Become an online privacy expert
In light of a change to its terms of service, the popular WhatsApp messaging service this week lost millions of its users, who migrated to other applications like Signal and Telegram, both claiming to offer better privacy. What are the risks involved in using such applications and how can we mitigate them?
It is our responsibility, as consumers, to think carefully about the data we give to companies.
The recent publication of a new Apple privacy policy has caused many of us to revisit what we are comfortable with from a data privacy perspective. The reality of the new WhatsApp policy is that it only affects the way you can interact with companies on the platform; it does not require the collection of new data, nor does it affect the content of your conversations.
While many of us may consider switching to alternative services, this is not always easy because many of our friends and colleagues are not yet on these new networks. It becomes especially difficult when our circle of friends spans different age groups, since each generation tends to prefer one particular service over another.
What we can do in these cases is to consider our own approach to data privacy, which can be divided into three broad categories: what we share, how we share and the tools we use.
If we first consider what we share, we must ask ourselves the following questions:
- When sharing our data with companies, do we need to share all the information requested?
- When we share content or thoughts with others, do we trust them or the system we use to keep that information safe?
- Ultimately, would we care if the information we share with companies, friends and contacts was made public?
When considering what we share with companies, it becomes critical to understand the implications. Although companies are required to comply with data privacy rules such as the GDPR (General Data Protection Regulation), as we have seen, many still succumb to the harmful tactics of hackers. Thus, it becomes our responsibility as consumers of Internet services to think twice about the data we provide. Even more important is to understand the rights we give to companies that store our data, especially when their service is free. We have become too accommodating when signing up for free trials, products, or services on the Internet, without thinking about what we are giving in return. We wouldn't do that in the physical world.
The change in WhatsApp policy has been misrepresented in the media, but it has allowed many of us to reflect on what we consider to be an appropriate use of our data. The question remains: what are we willing to do about our privacy?
Are we going to stop using services like WhatsApp, Facebook, Instagram and other social platforms? Are we really able to do that?
Some tips to keep in mind when considering your own online privacy:
- Digitally audit the services and applications you use and don't use. If you no longer use the service, sign out of all communications and ask companies to remove your data.
- Think carefully about what you share in social media chat groups or other networks, and whether you would feel comfortable having that content presented to a wider audience.
- Understand that each visit to a website creates a digital footprint that advertisers could access.
- To protect your privacy, use tools to mask your use, such as a VPN (Virtual Private Network), and consider using a password manager to generate and manage different passwords for different websites; a brief sketch of what that kind of password generation involves follows this list.
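For readers curious what generating a different strong password for every site actually involves, here is a minimal sketch in Python, not taken from the article, of the kind of random generation a password manager performs behind the scenes. The site names are hypothetical placeholders.

import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character uniformly at random from letters, digits, and punctuation,
    # using the cryptographically secure secrets module rather than plain random.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per site, so a breach at one service
# does not expose your accounts elsewhere.
for site in ("example-shop.com", "example-mail.com"):
    print(site, generate_password())

In practice, a password manager also stores these passwords in an encrypted vault so you never have to remember them; the point of the sketch is simply that each password should be long, random, and unique to one site.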
Privacy is a right that each of us must defend as we become increasingly dependent on the Internet and digital services. We need to remain vigilant and well-informed to safeguard that right as technology continues to evolve.
Source: https://the.ismaili/global/news/feature ... vacy-savvy
https://the.ismaili/portugal/tornar-se- ... ade-online
The Coming Technology Boom
Politics is grim but science is working.
A few months ago, the economic analyst Noah Smith observed that scientific advance is like mining ore. You find a vein you think is promising. You take a risk and invest heavily. You explore it until it taps out.
The problem has been that over the last few decades only a few veins have really been paying off and changing lives. Discoveries in information technology have obviously been massive — the internet and the smartphone. Thanks in part to public investment, clean energy innovation has been fast and plentiful. The price of solar modules has declined by 99.6 percent since 1976.
But life-altering breakthroughs, while still significant, are fewer than they once were. If you were born in 1900 and died in 1970, you lived from the age of the horse-drawn carriage to the era of a man on the moon. You saw the widespread use of electricity, air-conditioning, aviation, the automobile, penicillin, and so much else. But if you were born in 1960 and lived until today, the driving and flying experience would be safer, but otherwise the same, and your kitchen, aside from the microwave, is basically unchanged.
In 2011, the economist Tyler Cowen published a prescient book, “The Great Stagnation,” exploring why scientific advance was slowing down. Peter Thiel complained that we wanted flying cars, but we got Twitter.
But this technological lull may be ending. Suddenly a lot of smart people are writing about many veins that look promising. The first and most obvious is vaccines. The amazing fact about Covid-19 vaccines is that Moderna scientists had designed the first one by Jan. 13, 2020. They had the vaccine before many people even thought the disease was a threat.
It’s not only a new vaccine but also a new kind of vaccine. The mRNA vaccines will help us teach our bodies to fight pathogens more effectively and could lead to breakthroughs in combating all sorts of diseases. For example, researchers have hope for mRNA cancer vaccines, which wouldn’t prevent cancer, but could help your body fight some forms.
In energy, geothermal breakthroughs are generating tremendous excitement. As David Roberts notes in an excellent explainer in Vox, the molten core of the earth is about 10,000 degrees Fahrenheit, roughly the same temperature as the surface of the sun. If we could tap 0.1 percent of the energy under the earth’s surface we could supply humanity’s total energy needs for two million years.
Engineers are figuring out how to mine the heat in the nonporous rock beneath the surface. As Roberts writes, “If its more enthusiastic backers are correct, geothermal may hold the key to making 100 percent clean electricity available to everyone in the world.”
This is not even to mention fusion. In one of those stories that felt epochal when you read it, my Times colleague Henry Fountain reported last September on how M.I.T. researchers had designed a compact nuclear reactor that should work. China currently has an experimental thermonuclear reactor that is reaching 270 million degrees Fahrenheit.
It feels like autonomous vehicles have been three years away for the last 10 years. But sooner or later they will arrive. Waymo has already started a driverless rides service in Phoenix — like Uber and Lyft, but with nobody in the front seat.
Meanwhile, in the electric car sector, Toyota is developing a vehicle that can go 310 miles on one charge and can charge from zero to full in 10 minutes.
One could go on: artificial intelligence; space exploration seems to be heating up; a variety of anti-aging technologies are being pursued; on Wednesday The Times reported on an anti-obesity drug. There’s even lab-grown meat. This is meat grown from animal cells that would enable us to enjoy steaks and Chicken McNuggets without actually slaughtering cows and chickens.
Obviously, all these veins are not going to pay off, but what if we gradually created a world with clean cheap energy, driverless cars and more energetic productive years in our lives?
On the plus side, global productivity would surge. What economists call total factor productivity has been grinding along with 0 to 2 percent increases for years. But a series of breakthroughs could keep productivity surging. Our economy, and world, would feel very different.
On the negative side, the dislocations would be enormous, too. What happens to all those drivers? What happens to people who work on ranches if labs take a significant share of the market? The political difficulties will be complicated by the fact that the people who will profit from these high-tech industries tend to live in the highly educated blue parts of the country, while the old industry workers who would be displaced tend to live in the less educated red parts.
We would be riding the tiger of rapid change. The economy would grow faster but millions of people would have trouble finding a place in it. Universal basic income would become a red-hot topic.
Government investment has spurred a lot of this progress. Government would have to come up with aggressive ways to mitigate the shocks. But it is better to face the challenges of dynamism than the challenges of stasis. Life would be longer and healthier, energy would be cleaner and cheaper, there would be a greater sense of progress and wonder.
In a week of political gloom, I thought you’d like some good news.
https://www.nytimes.com/2021/02/11/opin ... 778d3e6de3
Innovation, Not Trees. How Bill Gates Plans to Save the Planet.
He has billions to donate for crises from coronavirus to climate change, and more hope now that Trump is out of office.
Bill Gates is publishing a new book, “How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need.” In it, Mr. Gates, the Microsoft co-founder turned philanthropist, outlines a path to a zero emissions future. Apparently electric cars, vegan diets and tree-hugging won’t get us there. And neither will rockets.
In this episode of “Sway,” Kara asks Mr. Gates why the world should listen to a billionaire with a private plane when it comes to the environment. They also discuss tech regulation, vaccine timelines and whether underestimating Elon Musk is ever a good idea.
Listen to podcast and read the transcript at:
https://www.nytimes.com/2021/02/15/opin ... 778d3e6de3
America’s Brutal Racial History Is Written All Over Our Genes
Our country has struggled to reckon with the horrors of the past. Could DNA tests help?
The consumer genetics testing company 23andMe this month announced that it is going public through a merger with a company founded by the billionaire Richard Branson, in a deal that valued it at $3.5 billion. This was just the latest big deal for the industry: Last year, the Blackstone Group acquired a majority stake in Ancestry, a 23andMe competitor, for $4.7 billion.
These investments are happening as the possibilities contained in our DNA have become more tangible and immediate — both in terms of our aspirations for the future, and our understanding of the past.
The debate around race consuming America right now is coinciding with a technological phenomenon — at-home genetic testing kits — revealing many of us are not who we thought we were. Some customers of the major DNA testing companies, which collectively have sold 37 million of these kits, are getting results that surprise them.
Perhaps they or a parent was adopted or donor-conceived and never told, or their families hid their genetic ancestries as an escape from discrimination. Maybe Dad isn’t their dad, genetically speaking, or they have a sister they never knew about. Some people are discovering their ancestors were Black, or Jewish. Others are learning their African-American lineages contain more European ancestry than they thought.
Our country, riven by wounds old and new over centuries of racist mistreatment, hasn’t figured out how to acknowledge the full horrors of the past and all the ways those horrors continue. The images from the Capitol Hill incursion drove that home: Violent white invaders were met with a more acquiescent police reception than peaceful Black Lives Matter protesters had months earlier; a rioter carried a Confederate flag through the Capitol building, while a noose hung outside.
Despite the reductionism that sometimes frames discussions of the “ethnicity estimates” that the genetic testing industry offers customers — Ancestry, for instance, is responsible for a disturbing ad relying on ethnic tropes and yoking genes to greatness, attributing a figure skater’s “grace” to her Asian heritage, and her “precision” to her Scandinavian roots — this moment may offer us an important opportunity to grapple with the blunt facts of our nation’s history. After all, to heal from the past, we first have to be willing to see it for what it was.
More...
https://www.nytimes.com/2021/02/16/opin ... 778d3e6de3
The role of financial technology in shaping sustainable futures
Over the last decade, digitalisation has disrupted finance across developed and emerging markets, giving rise to an explosion of financial technology — otherwise known as “fintech” — startups and platforms, impacting every aspect of finance, starting with access, availability, and affordability.
Mobile payment platforms, like M-Pesa in Kenya, have turned mobile devices into transactional tools and are now used by over one billion people globally. In Kenya alone, M-Pesa transactions amounted to nearly half of the country’s GDP in 2018 while EcoCash transacted close to 90% of Zimbabwe’s GDP in 2019. In China, Alipay and WeChatPay, the most popular socialised payment services, counted more than 1.7 billion users in 2019. Everywhere around the world, people have been able to access much needed financial services thanks to fintech innovations.
As it turns out, the benefits of fintech extend well beyond financial inclusion. For example, fintechs have played a key role during the coronavirus pandemic, with e-commerce platforms enabling hundreds of millions of people in lockdown to buy essentials online, payment platforms channeling trillions of dollars of stimulus packages in cash transfers, and algorithmic lending facilitating financial support to small- and medium-sized businesses.
Furthermore, fintechs have been instrumental in unlocking access to high-cost utilities and infrastructure in both developed and developing economies, from pay-as-you-go models driving access to clean-energy through more affordable solar home systems across Africa, through to rent-to-own approaches allowing farmers in Asia to use and eventually acquire expensive agricultural equipment. Fintech innovations have also enabled Bangladeshi and Kenyan citizens to invest in large-scale infrastructure projects and have facilitated sharing economy models that are transforming the way we all think about mobility and manage office space.
But perhaps most importantly, it is in the area of climate and biodiversity that fintechs are yet to have their most significant impact. Awareness around the climate crisis and natural capital loss has dramatically increased in recent years and has triggered a broad range of institutional and policy developments, along with market innovations. While the Covid pandemic has taken centre stage in the short-term, the more fundamental crisis facing mankind has not lessened and will require all the ingenuity, knowledge, and resources that we collectively possess if we are to prevail as a species.
Current trends are pointing to the fact that fintechs are already playing an important role in protecting our environment: it would indeed be impossible to have $1 trillion of green bonds — which are bonds created to fund projects that have positive environmental and climate benefits — without fintech-enabled real-time data flows on use of proceeds. Carbon markets, which aim to reduce greenhouse gas emissions, would not exist without underlying digital infrastructure and fintech products.
In addition, emerging innovations in the digitalization of assets are increasingly facilitating citizen participation in climate adaptation and mitigation. For example, digital assets such as Cedar Coin and Carbon Coin are enabling individuals to contribute to reforestation and conservation efforts. And Forest, a flagship product developed by Alibaba’s Ant Group in China, has advanced a gamified approach to citizen engagement around virtual tree planting, which is matched with actual trees being planted in real life. Within two years, the platform has mobilised 500 million users and resulted in 200 million trees being planted.
These are just a few examples of what is possible today. One could easily imagine a world in which fintech innovations could support the greater good and help us overcome our most pressing challenges. Whether that happens is largely a matter of choice, incentives, and good governance.
Digital literacy is becoming increasingly important in our technology-driven world. To learn more about the developments shaping our future, and how to prepare for these, visit our collection of Digital Awareness articles on The Ismaili.
https://the.ismaili/global/news/feature ... le-futures
Why Are Elon Musk and Jeff Bezos So Interested in Space?
We the people need to take more control of how we move into the brave new worlds beyond our planet.
Stunning images of planets Mars and Jupiter at:
https://www.nytimes.com/2021/02/26/opin ... 778d3e6de3
By Kara Swisher
Ms. Swisher covers technology and is a contributing opinion writer.
Feb. 26, 2021
A SpaceX Falcon 9 rocket carrying four NASA astronauts to the International Space Station set off from Cape Canaveral, Fla., in November. Credit: Joe Skipper/Reuters
Why do the world’s two richest men want to get off the planet so badly?
Elon Musk of Tesla and Jeff Bezos of Amazon have more than $350 billion in combined wealth and preside over two of the most valuable companies ever created. But when they’re not innovating on Earth, they have been focusing their considerable brain power on bringing a multiplanetary human habitat to reality.
For Mr. Musk, it’s through his other company, SpaceX, which has become an ever bigger player in the private space-technology arena. On top of satellite launches and other rocket innovations, the company announced it will send its first “all civilian” crew into orbit at the end of the year, in a mission called Inspiration4. SpaceX has already carried NASA astronauts to the International Space Station and is planning to transport more, as well as private astronauts, for a high price.
Most ambitiously, Mr. Musk has said that SpaceX will land humans on Mars by 2026. To do that, the private company will use a chunk of the close to $3 billion — including $850 million announced this week in a regulatory filing — that it has raised over the last year to finance this herculean effort.
While Mr. Musk might not be the first human to go to the red planet, he once told me that he wanted to die there, joking, “Just not on landing.”
Mr. Bezos, who is stepping down as chief executive of Amazon this year, is expected to accelerate his space-travel efforts through his company Blue Origin, whose tag line reads, in part, “Earth, in all its beauty, is just our starting place.”
Like SpaceX, Blue Origin is working on payload launches and reusable orbital launch vehicles, as well as on moon landing technology, to achieve what Mr. Bezos once called “low-cost access to space.” Blue Origin executives said recently that the company is close to blasting off into space with humans.
Mr. Bezos’ most extravagant notion, unveiled in 2019, is a vision of space colonies — spinning cylinders floating out there with all kinds of environments.
“These are very large structures, miles on end, and they hold a million people or more each,” he said, noting they are intended to relieve the stress on Earth and help make it more livable.
It’s probably good for space innovation that two billionaires are slugging it out and attracting all kinds of start-ups, investments and interest to the area. But all of their frantic aggression has been overshadowed of late by two spectacular efforts by NASA.
The two NASA missions delivered this week the kind of awe-inspiring moments that make one look up from the wretched news spewing out of our smartphones toward the stunning celestial beauty of the endless universe.
A composite made from images sent by the Perseverance rover shows the rim of Jezero Crater on the surface of Mars. Credit: NASA, via Associated Press
The first was the batch of images from amazing high-definition cameras on the Perseverance rover, a car-size autonomous vehicle that touched down in the Jezero Crater on Mars last week. The photographs are so sharp that you can zoom in close enough to look at the holes in the rocks on the surface and even get a pretty good sense of the dirt itself. The larger panorama is just as arresting, a desert scene that is breathtakingly alien while also feeling quite familiar.
I found myself staring at the scenes for an hour, marveling that I can see the details of an elegant wind-carved boulder from a distance of 133.6 million miles. The $2.7 billion Mars mission includes a search for signs of ancient Martian life, sample-collecting and the flight of a helicopter called Ingenuity.
But the imagery from Mars was quickly topped by an even older NASA mission to Jupiter by the Juno space probe, which entered the planet’s orbit in 2016. It did some very close fly-bys recently that are yielding perhaps the most stunning photos that we’ve ever seen of the planet.
NASA’s Juno mission captured this color-enhanced image of Jupiter’s cloud tops. Credit: NASA, via Agence France-Presse — Getty Images
Color-enhanced by citizen scientists from publicly available NASA data and images, the images show delicately swirling jet streams that look like a painting of quicksilver created by some space-faring artistic genius. I wish I could be riding on Juno myself to see up close the vast cyclones gather and the angry clouds seethe.
It was just a year ago that Juno sent back another image of Jupiter, looking like the best marble ever made, which NASA titled “Massive Beauty.”
The Juno mission captured this look at the southern hemisphere of Jupiter on Feb. 17, 2020. Credit: JPL-Caltech/SwRI/MSSS, via NASA
Perhaps the fact that life on Earth feels so precarious at this moment explains, at least in part, why Mr. Bezos and Mr. Musk want to find ways to get off it.
But it’s important to keep in mind that these two men are just two voices among billions of earthlings. It is incumbent on the rest of us to take more control of how we are going to move into the brave new worlds beyond our own gem of a planet.
We have handed over so much of our fate to so few people over the last decades, especially when it comes to critical technology. As we take tentative steps toward leaving Earth, it feels like we are continuing to place too much of our trust in the hands of tech titans.
Think about it: We the people invented the internet, and the tech moguls pretty much own it. And we the people invented space travel, and it now looks as if the moguls could own that, too.
Let’s hope not. NASA, and other government space agencies around the world, need our continued support to increase space exploration.
I get that we have enormous needs on this planet, and money put toward space travel could instead be spent on improving lives here on Earth. But the risk to our planet from climate change means we have to think much bigger.
Keep in mind a hidden message that NASA engineers put onto the descent parachute of the Perseverance rover. The colors on the chute were a binary code that translates into “Dare mighty things.”
Coming from across the vast and empty universe, it was a message not meant just for Mr. Bezos and Mr. Musk. It was actually meant for all of us.
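The parachute trick is simple enough to play with at home. Below is a minimal, purely illustrative Python sketch of the general idea as it has been publicly described: each letter is encoded as the binary value of its position in the alphabet (A=1 through Z=26). The 7-bit width and the function names are assumptions for illustration, and the chute's actual ring layout is not reproduced here.

# Illustrative sketch only: encode and decode a short message as
# fixed-width binary alphabet positions, in the spirit of the
# Perseverance parachute pattern. Not NASA's actual layout.

def encode(message, bits=7):
    """Turn each letter into a fixed-width binary string (A=1 ... Z=26)."""
    return [format(ord(ch) - ord('A') + 1, '0{}b'.format(bits))
            for ch in message.upper() if ch.isalpha()]

def decode(chunks):
    """Turn fixed-width binary strings back into letters."""
    return ''.join(chr(int(chunk, 2) + ord('A') - 1) for chunk in chunks)

encoded = encode("Dare mighty things")
print(encoded)          # ['0000100', '0000001', '0010010', '0000101', ...]
print(decode(encoded))  # DAREMIGHTYTHINGS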
Stunning photos at:
https://www.nytimes.com/2021/02/26/opin ... 778d3e6de3
The role of science in development
Whether in our society, in improving the quality of education, or even beyond the surface of the Earth, science has always contributed to making the world we live in a better place.
Ask yourself what the world would be like without airplanes, boats, cars, or medical facilities, to name a few. Science has certainly helped our world become a better place.
Did you also know that many Islamic scholars have played a major role in the development of science as a whole? Muslim scholars established new scientific disciplines, namely algebra, trigonometry, and chemistry, and made major advances in medicine, astronomy, engineering, and agriculture.
Coming to the modern day, it is clear that Islam, and our Ismaili community, still contribute to science in many ways. One of the main ways in which we as a community contribute is through the work of the Aga Khan Development Network (AKDN).
A good example of an institution using science for modern development is the Mountain Societies Research Institute (MSRI) at the University of Central Asia. The institute has done excellent work in boosting the development of rural areas by analysing challenges and designing solutions to them. One of its projects involved conducting surveys to bring local people to the table to discuss conflicts of interest and the sharing of natural resources along the Kyrgyz-Tajik border. These working groups provide a real transfer of knowledge in the interest of local communities. The MSRI also applies statistical analysis and advanced technologies such as remote sensing to identify and quantify natural resource problems in the area.
Like any other human endeavour, science does not come without harmful side effects. One of many is the pollution caused by modern cars and planes as the use of such vehicles increases worldwide, contributing to climate change and weakening the ozone layer.
Thus, we can conclude that although science has problems and harmful side effects, without it our lives would surely not be as easy as they are today. Science has greatly improved our quality of life, and its harmful side effects may yet be overcome if we act as responsible beings.
https://the.ismaili/mozambique/the-isma ... evelopment
Do You Really Need to Fly?
Videoconferencing is good enough to replace a lot of pointless business travel.
I once flew round-trip from San Francisco to London to participate in an hourlong discussion about a book. Another time it was San Francisco-Hong Kong, Hong Kong-Singapore and back again for two lunch meetings, each more lunch than meeting. I went to Atlanta once to interview an official who flaked out at the last minute. And there was that time in Miami: three days, 5,000 miles, hotel, rental car — and on the way back a sinking realization that the person I’d gone to profile was too dull for a profile.
I confess to this partial history of gratuitous business travel knowing that I’ll be screenshotted and virally mocked: Check out the New York Times columnist whining about all the fabulous trips he’s had to endure!
But I’ll accept the flagellation, for I see now how I’ve sinned. We are a year into a pandemic that has kept much of the world grounded. Yet in many sectors that once relied on in-person sessions, big deals are still getting done, sales are still being closed and networkers can’t quit networking.
Face-to-face interactions were said to justify the $1.4 trillion spent globally on business travel in 2019. In 2020, business travel was slashed in half, our faces were stuck in screens, and yet many of the companies used to spending boatloads on travel are doing just fine.
More...
https://www.nytimes.com/2021/03/10/opin ... 778d3e6de3
We Need Laws to Take On Racism and Sexism in Hiring Technology
Artificial intelligence used to evaluate job candidates must not become a tool that exacerbates discrimination.
American democracy depends on everyone having equal access to work. But in reality, people of color, women, those with disabilities and other marginalized groups experience unemployment or underemployment at disproportionately high rates, especially amid the economic fallout of the Covid-19 pandemic. Now the use of artificial intelligence technology for hiring may exacerbate those problems and further bake bias into the hiring process.
At the moment, the New York City Council is debating a proposed new law that would regulate automated tools used to evaluate job candidates and employees. If done right, the law could make a real difference in the city and have wide influence nationally: In the absence of federal regulation, states and cities have used models from other localities to regulate emerging technologies.
Over the past few years, an increasing number of employers have started using artificial intelligence and other automated tools to speed up hiring, save money and screen job applicants without in-person interaction. These are all features that are increasingly attractive during the pandemic. These technologies include screeners that scan résumés for key words, games that claim to assess attributes such as generosity and appetite for risk, and even emotion analyzers that claim to read facial and vocal cues to predict if candidates will be engaged and team players.
In most cases, vendors train these tools to analyze workers who are deemed successful by their employer and to measure whether job applicants have similar traits. This approach can worsen underrepresentation and social divides if, for example, Latino men or Black women are inadequately represented in the pool of employees. In another case, a résumé-screening tool could identify Ivy League schools on successful employees’ résumés and then downgrade résumés from historically Black or women’s colleges.
In its current form, the council’s bill would require vendors that sell automated assessment tools to audit them for bias and discrimination, checking whether, for example, a tool selects male candidates at a higher rate than female candidates. It would also require vendors to tell job applicants the characteristics the test claims to measure. This approach could be helpful: It would shed light on how job applicants are screened and force vendors to think critically about potential discriminatory effects. But for the law to have teeth, we recommend several important additional protections.
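To make the auditing idea concrete, here is a minimal Python sketch of one computation such an audit might run: comparing selection rates across groups and forming a disparate-impact ratio, often judged informally against a four-fifths benchmark. The group labels, data, and threshold are hypothetical and are not taken from the bill itself.

# Minimal, illustrative sketch of one check a bias audit might run.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Invented applicant data for illustration only.
applicants = ([("men", True)] * 40 + [("men", False)] * 60
              + [("women", True)] * 25 + [("women", False)] * 75)
rates = selection_rates(applicants)
print(rates)                    # {'men': 0.4, 'women': 0.25}
print(disparate_impact(rates))  # 0.625, below the informal 0.8 benchmark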
The measure must require companies to publicly disclose what they find when they audit their tech for bias. Despite pressure to limit its scope, the City Council must ensure that the bill would address discrimination in all forms — on the basis of not only race or gender but also disability, sexual orientation and other protected characteristics.
These audits should consider the circumstances of people who are multiply marginalized — for example, Black women, who may be discriminated against because they are both Black and women. Bias audits conducted by companies typically don’t do this.
The bill should also require validity testing, to ensure that the tools actually measure what they claim to, and it must make certain that they measure characteristics that are relevant for the job. Such testing would interrogate whether, for example, candidates’ efforts to blow up a balloon in an online game really indicate their appetite for risk in the real world — and whether risk-taking is necessary for the job. Mandatory validity testing would also eliminate bad actors whose hiring tools do arbitrary things like assess job applicants’ personalities differently based on subtle changes in the background of their video interviews.
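As a rough sketch of what the most basic validity check could look like, the hypothetical Python snippet below asks whether a game's score even correlates with the outcome it claims to predict. The variable names and numbers are invented; a real validation study involves far more than a single correlation.

# Illustrative validity check: does the game score track job performance?
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

balloon_game_scores = [3, 7, 5, 9, 2, 8, 6, 4]  # invented data
job_performance = [2, 6, 5, 7, 3, 6, 5, 4]      # invented data
print(round(pearson(balloon_game_scores, job_performance), 2))
# A value near 0 would suggest the game measures little that matters
# for the job; this toy data happens to give roughly 0.94.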
In addition, the City Council must require vendors to tell candidates how they will be screened by an automated tool before the screening, so candidates know what to expect. People who are blind, for example, may not suspect that their video interview could score poorly if they fail to make eye contact with the camera. If they know what is being tested, they can engage with the employer to seek a fairer test. The proposed legislation currently before the City Council would require companies to alert candidates within 30 days if they have been evaluated using A.I., but only after they have taken the test.
More...
https://www.nytimes.com/2021/03/17/opin ... 778d3e6de3
Book Review of 2 Books
Can Humans Be Replaced by Machines?
GENIUS MAKERS
The Mavericks Who Brought AI to Google, Facebook, and the World
By Cade Metz
FUTUREPROOF
9 Rules for Humans in the Age of Automation
By Kevin Roose
It is as hard to understand a technological revolution while it is happening as to know what a hurricane will do while the winds are still gaining speed. Through the emergence of technologies now regarded as basic elements of modernity — electric power, the arrival of automobiles and airplanes and now the internet — people have tried, with hit-and-miss success, to assess their future impact.
The most persistent and touching error has been the ever-dashed hope that, as machines are able to do more work, human beings will be freed to do less, and will have more time for culture and contemplation. The greatest imaginative challenge seems to be foreseeing which changes will arrive sooner than expected (computers outplaying chess grandmasters), and which will be surprisingly slow (flying cars). The tech-world saying is that people chronically overestimate what technology can do in a year, and underestimate what it can do in a decade and beyond.
So it inevitably goes with one of this moment’s revolutions, the combination of ever-higher computing speed and vastly more-voluminous data that together are the foundations of artificial intelligence, or A.I. Depending on how you count, the A.I. revolution began about 60 years ago, dating to the dawn of the computer age and a concept called the “Perceptron” — or has just barely begun. Its implications range from utilities already routinized into daily life (like real-time updates on traffic flow), to ominous steps toward “1984”-style perpetual-surveillance states (like China’s facial recognition system, which within one second can match a name to a photo of any person within the country).
Looking back, it’s easy to recognize the damage done by waiting too long to face important choices about technology — or leaving those choices to whatever a private interest might find profitable. These go from the role of the automobile in creating America’s sprawl-suburb landscape to the role of Facebook and other companies in fostering the disinformation society.
“Genius Makers” and “Futureproof,” both by experienced technology reporters now at The New York Times, are part of a rapidly growing literature attempting to make sense of the A.I. hurricane we are living through. These are very different kinds of books — Cade Metz’s is mainly reportorial, about how we got here; Kevin Roose’s is a casual-toned but carefully constructed set of guidelines about where individuals and societies should go next. But each valuably suggests a framework for the right questions to ask now about A.I. and its use.
“Genius Makers” is about the people who have built the A.I. world — scientists, engineers, linguists, gamers — more than about the technology itself, or its good and bad effects. The fundamental technical debates and discoveries on which A.I. is based are a background to the individual profiles and corporate-drama scenes Metz presents. The longest running, most consequential debate is between proponents of two different approaches to increasing computerized “intelligence,” which can be oversimplified as “thinking like a person” versus “thinking like a machine.”
The first boils down to using “neural networks” — the neurons in this case being computer circuits — that are designed to conduct endless trial-and-error experiments and improve their accuracy as they match their conclusions against real-world data. The second boils down to equipping a computer with detailed sets of rules — rules of syntax and semantics for language translation, rules of syndrome-pattern for medical diagnoses. Much of Metz’s story runs from excitement for neural networks in the early 1960s, to an “A.I. winter” in the 1970s, when that era’s computers proved too limited to do the job, to a recent revival of a neural-network approach toward “deep learning,” which is essentially the result of the faster and more complex self-correction of today’s enormously capable machines.
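As a toy illustration of the first approach, the short Python sketch below trains a single perceptron, the concept mentioned earlier, by nudging its weights whenever a prediction disagrees with the labeled answer. It is a teaching aid under simplified assumptions, not anything drawn from either book.

# Toy perceptron: trial-and-error weight updates against labeled data.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust two weights and a bias whenever a prediction is wrong."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - prediction
            w1 += lr * error * x1   # nudge weights toward the right answer
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Learn the logical AND function from labelled examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_samples)
for (x1, x2), target in and_samples:
    out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
    print((x1, x2), "->", out, "expected", target)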
Metz tells the story of more than a dozen of the world’s A.I. pioneers, of whom two come across most vividly. One is Geoffrey Hinton, an English-born computer scientist now in his mid-70s, who is introduced in the prologue as “The Man Who Didn’t Sit Down.” Because of a back condition, Hinton finds it excruciating to sit in a chair — and he has not done so since 2005. Instead he spends his waking hours standing, walking or lying down. This means, among other things, that he cannot take commercial airplane flights. In one crucial scene of Metz’s tale he is placed on a makeshift bed on the floor of a Gulfstream, and then strapped down for the flight across the Atlantic to an A.I. meeting in London.
The other most prominent figure in Metz’s book is Demis Hassabis, who grew up in London and is now in his mid-40s. He is a former chess prodigy and electronic-games entrepreneur and designer who founded a company called DeepMind, now a leading force in the quest for the grail of A.G.I., or artificial general intelligence.
“Superintelligence was possible and he believed it could be dangerous, but he also believed it was still many years away,” Metz writes of Hassabis. “‘We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,’ he has said. ‘The time we have now is valuable, and we need to make use of it.’”
Making use of that time is the entire theme of “Futureproof.” Roose’s book has two sections: “The Machines,” about the surprising potential and equally surprising limits of automated intelligence, and “The Rules,” which offers nine maxims for how people and organizations can best respond. In structure the book might look like a familiar, business-oriented “Secrets of Success” tract. But it is actually a concise, insightful and sophisticated guide to maintaining humane values in an age of new machines.
In the book’s first section, Roose lays out distinctions between jobs and industries in which A.I. is likely to dominate, and those where it still disappoints. Computers are unmatchable in speed and complexity within known boundaries — the rules of chess, even the waypoints an airplane must follow through the sky. (Today’s airlines can use A.I. guidance to “autoland” without a pilot, whereas today’s cars can’t safely “autodrive” down your street, in part because no pedestrian is going to lurch into a plane’s descent path.) But the more fluid the setting, the greater the difficulties. “Most A.I. is built to solve a single problem, and fails when you ask it to do something else,” he writes. “And so far, A.I. has fared poorly at what is called ‘transfer learning’ — using information gained while solving one problem to do something else.”
Roose links this technological transformation to the many others that societies have undergone. Nearly all have eventually made humanity richer overall — in the long run. But, he writes, “we don’t live in the aggregate or over the long term. We experience major economic shifts as individuals with finite careers and life spans.”
“Futureproof” offers suggestions for individual pursuits and for social policy. But the most eloquent parts of the book come when Roose moves from preserving livelihoods to protecting basic humanity.
Social-media algorithms, he points out, are ever more precisely honed to attract and hold your attention. Click on the next video, scroll to the next tweet. Thus technology becomes humanity’s master, rather than the reverse. “There are established ways to train our brains to better guard our attention,” writes Roose — who, it is worth noting, is a millennial-era writer who specializes in digital technology. “For me, the best attention-guarding ritual of all is reading — sitting down to read physical, printed books for long stretches of time, with my phone sequestered somewhere far away.”
Technology’s effects are driven by technology itself, but even more by human choice. Roose warns against treating “technological change as a disembodied natural force that simply happens to us, like gravity or thermodynamics.” Instead we all should realize that “none of this is predetermined. … Regulators, not robots, decide what limits to place on emerging technologies like facial recognition and targeted digital advertising.” The message from both of these books is that the sky is not falling — but it could. There is time to make a choice.
https://www.nytimes.com/2021/03/19/book ... ks_norm_20
Scientists created a hybrid human-monkey embryo in a lab, sparking concerns others could take the experiment too far
- Scientists injected human stem cells into macaque embryos in a study on human development.
- Some of the embryos continued to develop for 20 days, researchers said.
- But the experiment has sparked an ethical debate among scientists.
Monkey embryos containing human cells were kept alive for 20 days in an experiment carried out by a US-Chinese team.
The embryos were made by injecting human stem cells into macaque embryos as part of research into early human development, and the results were published in the journal "Cell." Only some of the embryos survived for 20 days, the researchers said.
The research team was led by Juan Carlos Izpisua Belmonte of the Salk Institute, who helped make a mixed-species embryo of a human and a pig in 2017.
"As we are unable to conduct certain types of experiments in humans, it is essential that we have better models to more accurately study and understand human biology and disease," he said in a press release about the study. "An important goal of experimental biology is the development of model systems that allow for the study of human diseases under in vivo conditions."
The new study has sparked an ethics debate among some scientists concerned about creating embryos that are part human and part animal.
Anna Smajdor, a biomedical ethics lecturer and researcher at the University of East Anglia's Norwich Medical School, told the BBC: "The scientists behind this research state that these chimeric embryos offer new opportunities, because 'we are unable to conduct certain types of experiments in humans'. But whether these embryos are human or not is open to question."
https://www.businessinsider.com/scienti ... lab-2021-4
The Robot Surgeon Will See You Now
Real scalpels, artificial intelligence — what could go wrong?
Sitting on a stool several feet from a long-armed robot, Dr. Danyal Fer wrapped his fingers around two metal handles near his chest.
As he moved the handles — up and down, left and right — the robot mimicked each small motion with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot’s tiny claws did much the same. This is how surgeons like Dr. Fer have long used robots when operating on patients. They can remove a prostate from a patient while sitting at a computer console across the room.
But after this brief demonstration, Dr. Fer and his fellow researchers at the University of California, Berkeley, showed how they hope to advance the state of the art. Dr. Fer let go of the handles, and a new kind of computer software took over. As he and the other researchers looked on, the robot started to move entirely on its own.
With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and gingerly hooked it onto a new peg. Then the robot did the same with several more rings, completing the task as quickly as it had when guided by Dr. Fer.
The training exercise was originally designed for humans; moving the rings from peg to peg is how surgeons learn to operate robots like the one in Berkeley. Now, an automated robot performing the test can match or even exceed a human in dexterity, precision and speed, according to a new research paper from the Berkeley team.
The project is a part of a much wider effort to bring artificial intelligence into the operating room. Using many of the same technologies that underpin self-driving cars, autonomous drones and warehouse robots, researchers are working to automate surgical robots too. These methods are still a long way from everyday use, but progress is accelerating.
Video and more at:
https://www.nytimes.com/2021/04/30/tech ... 778d3e6de3
This artificial, intelligent world
Artificial intelligence has spread like wildfire. It has not only reached every corner of scientific research, but has moved out into the wider world, wrapping itself even in the robes of justice. This weekend's reads are all about AI: its shortcomings, its successes, and how it's already shaping our lives.
The criminal justice system uses artificial intelligence to imprison Black Americans
Computer programs used in 46 states incorrectly label Black defendants as “high-risk” at twice the rate as white defendants
https://massivesci.com/articles/machine ... -fairness/
*******
Artificial intelligence isn’t very intelligent and won’t be any time soon
For all of the recent advances in artificial intelligence, machines still struggle with common sense
https://massivesci.com/articles/artific ... 0%9F%A4%96
********
Is artificial intelligence worsening COVID-19′s toll on Black Americans?
Experts are asking if biased algorithms exacerbate health disparities
https://massivesci.com/articles/ai-medi ... 0%9F%A4%96
********
Using artificial intelligence to discover new treatments for superbugs
Machine learning is pointing researchers toward molecules that are structurally different from current antibiotics
https://massivesci.com/notes/machine-le ... 0%9F%A4%96
*******
Artificial intelligence isn’t a ‘black box.’ It’s a key to studying the brain
Algorithms can help us see how our unconscious processes work – if we understand their language
https://massivesci.com/articles/artific ...
Interview with Rahim Hirji, technology consultant and country manager at Quizlet

In this insightful conversation with The Ismaili, Rahim Hirji details the ways in which technology is having a growing influence on our everyday lives, covering topics from artificial intelligence and robotics to social media and education. Detailing some of the opportunities and risks this presents, Rahim suggests how we might prepare for an increasingly digital future.
The Covid-19 pandemic has been referred to as the ‘great accelerator’ of digital transformation. Who stands to benefit and lose from such developments?
We saw digitally native companies benefit most from the pandemic and companies like Amazon and Netflix took centre stage as the default options in their categories. While this great acceleration towards a digital-centric world has been an upheaval to many industries, this reset brings with it much opportunity.
Industries like pharmaceuticals, online marketplaces like Etsy and Alibaba, logistics companies that have delivered everything we needed during the lockdown, and communication platforms like Zoom have all benefited from the pandemic. The leaders in these areas will continue to win, and in some ways, alongside the leading technology companies, they form part of the new infrastructure for the future. Companies will “plug-in” to this infrastructure and become dependent on it. If you are taking a new product to market, you would have to consider distributing via Amazon and its supply chain. From a work perspective, many companies have realised they can operate remotely and have reworked how teams are run, allowing employees to work from wherever they want using collaborative tools like Slack, Notion, and Asana.
There have been some obvious casualties of the pandemic, from traditional retail to cinema to tourism, but it will be interesting to see which industry segments are now required to rethink themselves further from within, from hospitality to healthcare to manufacturing. In these and in others, new business models will be defined, and redefined, in order to survive. On a personal note, I’m also interested to see how the school education sector reforms itself to protect against the catastrophe of students missing out on large parts of their learning and formation.
How are Educational Technology (EdTech) solutions disrupting traditional learning processes?
As with many industries, the education sector has been reshaped by the adoption of digital technology within the overall learning process, and we have seen an escalation of change during the pandemic. There are many areas of change, but I see three main areas of disruption from EdTech:
More data-driven insight: with tracking at every stage of the learning process, the potential for bespoke personalised learning will become more prevalent, meaning that everyone could have a unique learning experience.
Globally accessible education: it’s now easier and cheaper than ever before to pick up anything from basic skills to Ivy League-style instruction. In fact, using platforms like Coursera and edX, you can take the equivalent of a university course for free. You may not get the full qualification, but this sort of accessibility can help you adapt in these rapidly changing times.
Immersive learning: there’s a concept called blended learning which involves a combination of traditional classroom-based instruction with online learning. I see this evolving to include additional emerging mediums as technology reduces in price. Imagine a virtual reality history lesson where a student is transported to Ancient Greece, homework being supported by prompting chatbots, or language learning being practised with Google Assistant or Alexa.
As we move forward through this pandemic, I’m hopeful that EdTech will support a wider realisation that education doesn’t and shouldn’t just stop when one reaches the end of their formal school or university phase. EdTech solutions will support lifelong and student-driven learning whereby we learn, skill-up, and reskill on a continuous basis.
Young people today have never lived in a world without social media. What are the implications of this for the future of society?
The interference of social media in our actual lives is a very real problem, especially as some people allow it to shape their deepest thoughts. But the use of social media has moved beyond our real lives, and those in the midst of this bubble are likely to have more followers or connections than actual friends they know in real life and have deep relationships with.
In my mind, there are two specific implications for the future:
The first is that, however much we want and need to control the digital universe, it does exist. Many of us have colleagues we have never met, or suppliers we only converse with online. So my point here is that we need to be good at conversing selectively, where we need to and where it can be beneficial for us, but we also need to embrace real-life interaction, which takes me on to my second point.
Soft skills like communication, critical thinking, emotional intelligence, and teamwork are the very skills that become important in an age of artificial intelligence (AI), machines, and technology. And these are the skills that are put at risk in an age where people live online. It will become increasingly important for us all to over-index in developing these skills to be able to stand out and lead. It is a juxtaposition: living online erodes these skills, yet society needs them more than ever, and this needs to be actively addressed. Skills like coding, data analysis, and an appreciation of technology will be at the forefront of the minds of those going into education today, but without the soft skills, talented and bright individuals will not reach their full potential in work and life. Parents of children under 18 will need to make an extra effort to coach and support their sons and daughters to adequately prepare them for the future, and to actively allow experiences that support personal growth in areas like problem-solving, creative thinking, and global citizenship, as well as key interpersonal skills.
What is your take on artificial intelligence — could the risks outweigh the rewards?
We’ve all heard some of the outlandish predictions that surround artificial intelligence, and actually, if you play out how automation and machine learning are evolving, there is some validity that AI could do all of the crazy things we hear about and then some. The reality is that artificial intelligence is already powering our everyday lives: predictive search and algorithmic results when we use Google, recommendations when we use Amazon or Netflix, personalised advertising that follows us around the Internet, price setting on what you might be willing to pay for your Uber, or recalculating your route through traffic on Google Maps — as well as that bespoke experience as you scroll through LinkedIn, Instagram, or TikTok. And if AI is already powering our world and our habits, how do we take back control?
I can’t stress enough how important a technology AI is and will be over the coming years, from disease detection and diagnosis to fully autonomous cars reducing deaths, automated investment of our funds, and virtual tutors supporting teachers in and out of the classroom. If we think about how revolutionary the Internet has been for us, artificial intelligence will build upon that world.
But with revolution comes pain. Over the coming years, we will see much misuse of technology: monitoring of our behaviour in ways that we can’t quite imagine at the moment, and everything from privacy violations and ethical issues to problems like weapons automation. We also have to face the existential risk that may present itself as AI becomes “superintelligent”, surpassing human intelligence itself. We can’t directly control this risk, but we can be circumspect in controlling our own data, and make sure that we are not slaves to the technology.
https://the.ismaili/global/news/feature ... er-quizlet

Two New Laws Restrict Police Use of DNA Search Method
Maryland and Montana have passed the nation’s first laws limiting forensic genealogy, the method that found the Golden State Killer.
New laws in Maryland and Montana are the first in the nation to restrict law enforcement’s use of genetic genealogy, the DNA matching technique that in 2018 identified the Golden State Killer, in an effort to ensure the genetic privacy of the accused and their relatives.
Beginning on Oct. 1, investigators working on Maryland cases will need a judge’s signoff before using the method, in which a “profile” of thousands of DNA markers from a crime scene is uploaded to genealogy websites to find relatives of the culprit. The new law, sponsored by Democratic lawmakers, also dictates that the technique be used only for serious crimes, such as murder and sexual assault. And it states that investigators may only use websites with strict policies around user consent.
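For readers curious how a marker "profile" can point toward relatives, here is a deliberately simplified sketch. Real genealogy services compare long shared DNA segments measured in centimorgans; this toy version just counts matching alleles across invented marker profiles, which is enough to show why a close relative stands out from an unrelated person.

```python
# A highly simplified illustration of the matching idea behind forensic
# genealogy. Profiles and the "close relative" generator are invented; this
# is not how consumer databases actually compute kinship.
import random

random.seed(0)
N_MARKERS = 1000

def random_profile():
    return [random.randint(0, 1) for _ in range(N_MARKERS)]

def close_relative_of(profile):
    # Crude stand-in for genetic inheritance: copy most markers, randomize the rest.
    return [g if random.random() < 0.75 else random.randint(0, 1) for g in profile]

def shared_fraction(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

scene = random_profile()          # profile recovered from the crime scene
relative = close_relative_of(scene)
stranger = random_profile()

print("relative match:", round(shared_fraction(scene, relative), 2))  # around 0.87
print("stranger match:", round(shared_fraction(scene, stranger), 2))  # around 0.50
```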
Montana’s new law, sponsored by a Republican, is narrower, requiring that government investigators obtain a search warrant before using a consumer DNA database, unless the consumer has waived the right to privacy.
The laws “demonstrate that people across the political spectrum find law enforcement use of consumer genetic data chilling, concerning and privacy-invasive,” said Natalie Ram, a law professor at the University of Maryland who championed the Maryland law. “I hope to see more states embrace robust regulation of this law enforcement technique in the future.”
More...
https://www.nytimes.com/2021/05/31/scie ... iversified
The Day Facebook Ruined the Internet
Your uncle caught a flounder this afternoon. President Biden said something about the Middle East. It’s your boss’s birthday. Your unrequited crush from sophomore year is with some dude on a beach in the Florida Panhandle and drinking a beer.
Feeds, updated in real time and tailored to individual users, have become a standard feature of online social networks. In the Opinion video above, Jacob Hurwitz-Goodman, a Los Angeles-based filmmaker, traces the proliferation of these streams of curated updates to one day in September 2006 — the day Facebook switched on its News Feed.
The News Feed’s launch had a seismic impact on the internet both in the short term — by inducing widespread apoplexy among Facebook users — and in the long term by fundamentally changing the social media landscape and experience. But Mr. Hurwitz-Goodman argues that the News Feed and the internetwide transformations it inspired resulted in not only a decrease in privacy but also a loss of user autonomy and an erosion of a widely shared sense of community.
Watch video at:
https://www.nytimes.com/2021/06/09/opin ... 778d3e6de3
A Covid Test as Easy as Breathing
Scientists have been dreaming of disease-detecting breathalyzers for years. Has the time for the technology finally come?
In May, musicians from dozens of countries descended on Rotterdam, the Netherlands, for the Eurovision Song Contest. Over the course of the competition, the performers — clad in sequined dresses, ornate crowns or, in one case, an enormous pair of angel wings — belted and battled it out for their chance at the title.
But before they were even allowed onstage, they had to pass another test: a breath test.
When they arrived at the venue, the musicians were asked to exhale into a water-bottle-sized device called the SpiroNose, which analyzed the chemical compounds in their breath to detect signatures of a coronavirus infection. If the results came back negative, the performers were cleared to compete.
The SpiroNose, made by the Dutch company Breathomix, is just one of many breath-based Covid-19 tests under development across the world. In May, Singapore’s health agency granted provisional authorization to two such tests, made by the domestic companies Breathonix and Silver Factory Technology. And researchers at Ohio State University say they have applied to the U.S. Food and Drug Administration for an emergency authorization of their Covid-19 breathalyzer.
“It’s clear now, I think, that you can detect this disease with a breath test,” said Paul Thomas, a chemist at Loughborough University in England. “This isn’t science fiction.”
Scientists have long been interested in creating portable devices that can quickly and painlessly screen a person for disease simply by taking a whiff of their breath. But delivering on this dream has proved to be a challenge. Different diseases may cause similar breath changes. Diet can affect the chemicals someone exhales, as can smoking and alcohol consumption, potentially complicating disease detection.
Still, scientists say, advances in sensor technology and machine learning, combined with new research and investment spurred by the pandemic, mean that the moment for disease-detecting breathalyzers may have finally arrived.
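The machine-learning half of such a device can be sketched in a few lines: a sensor array yields a vector of readings per exhalation, and a classifier learns to separate infected from uninfected samples. The data below are synthetic and the model is a generic stand-in; it is not the SpiroNose's actual pipeline.

```python
# Illustrative sketch of training a breath-test classifier on synthetic
# sensor-array readings. Real devices use their own sensors, features, and
# clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_samples, n_sensors = 400, 8

# Healthy breaths: baseline sensor noise. "Infected" breaths: a small shift on
# a few sensors, mimicking a characteristic chemical signature.
healthy = rng.normal(0.0, 1.0, size=(n_samples, n_sensors))
infected = rng.normal(0.0, 1.0, size=(n_samples, n_sensors))
infected[:, :3] += 0.8

X = np.vstack([healthy, infected])
y = np.array([0] * n_samples + [1] * n_samples)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```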
More...
https://www.nytimes.com/2021/07/11/heal ... iversified
Richard Branson Launches Into Space on Virgin Galactic Flight
The 70-year-old British billionaire and crew members of Virgin Galactic launched the commercial space plane Unity from New Mexico, reached the edge of space and landed safely back at the spaceport on Sunday.
Watch video at:
https://www.nytimes.com/video/science/s ... 778d3e6de3
W.H.O. Experts Seek Limits on Human Gene-Editing Experiments
The panel also called on countries to ensure that beneficial forms of genetic alteration be shared equitably.
A committee of experts working with the World Health Organization on Monday called on the nations of the world to set stronger limits on powerful methods of human gene editing.
Their recommendations, made after two years of deliberation, aim to head off rogue science experiments with the human genome, and ensure that proper uses of gene-editing techniques are beneficial to the broader public, particularly people in developing countries, and not only the wealthy.
“I am very supportive,” said Dr. Leonard Zon, a gene therapy expert at Harvard University who was not a member of the committee, but called it a “thoughtful group.” Recent gene-editing results are “impressive,” he said, and the committee’s recommendations will be “very important for therapy in the future.”
The guidelines proposed by the W.H.O. committee were prompted in large part by the case of He Jiankui, a scientist in China who stunned the world in November 2018 when he announced he had altered the DNA of human embryos using CRISPR, a technique that allows precision editing of genes. Such alterations meant that any changes that occurred in the genes would be replicated in every cell of the embryo, including sperm and egg cells. And that meant that the alterations, even if they were deleterious instead of helpful, would arise not just in the babies born after gene editing but in every generation their DNA was passed on to.
More...
https://www.nytimes.com/2021/07/12/scie ... iversified
Tapping into the Brain to Help a Paralyzed Man Speak
In a once unimagined accomplishment, electrodes implanted in the man’s brain transmit signals to a computer that displays his words.
He has not been able to speak since 2003, when he was paralyzed at age 20 by a severe stroke after a terrible car crash.
Now, in a scientific milestone, researchers have tapped into the speech areas of his brain — allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer that displays them on the screen.
His first recognizable sentence, researchers said, was, “My family is outside.”
The achievement, published on Wednesday in the New England Journal of Medicine, could eventually help many patients with conditions that steal their ability to talk.
“This is farther than we’ve ever imagined we could go,” said Melanie Fried-Oken, a professor of neurology and pediatrics at Oregon Health & Science University, who was not involved in the project.
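A toy sketch helps make the decoding problem concrete: each attempted word produces a window of multi-electrode activity, and the decoder picks the most likely word from a constrained vocabulary. The activity patterns, four-word vocabulary, and nearest-centroid classifier below are invented stand-ins, not the published system, which relies on neural networks and language modelling.

```python
# Toy illustration of word decoding from synthetic "neural activity" windows.
import numpy as np

rng = np.random.default_rng(7)
vocab = ["my", "family", "is", "outside"]
n_electrodes = 16

# Invent one characteristic activity pattern per word, then simulate noisy trials.
patterns = {word: rng.normal(0, 1, n_electrodes) for word in vocab}

def simulate_trial(word):
    return patterns[word] + rng.normal(0, 0.5, n_electrodes)

def decode(window):
    # Nearest-centroid decoding: pick the word whose pattern is closest.
    return min(vocab, key=lambda w: np.linalg.norm(window - patterns[w]))

sentence = ["my", "family", "is", "outside"]
decoded = [decode(simulate_trial(w)) for w in sentence]
print("attempted:", " ".join(sentence))
print("decoded:  ", " ".join(decoded))
```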
Photos and more...
https://www.nytimes.com/2021/07/14/heal ... iversified
What Ever Happened to IBM’s Watson?
IBM’s artificial intelligence was supposed to transform industries and generate riches for the company. Neither has panned out. Now, IBM has settled on a humbler vision for Watson.
A decade ago, IBM’s public confidence was unmistakable. Its Watson supercomputer had just trounced Ken Jennings, the best human “Jeopardy!” player ever, showcasing the power of artificial intelligence. This was only the beginning of a technological revolution about to sweep through society, the company pledged.
“Already,” IBM declared in an advertisement the day after the Watson victory, “we are exploring ways to apply Watson skills to the rich, varied language of health care, finance, law and academia.”
But inside the company, the star scientist behind Watson had a warning: Beware what you promise.
David Ferrucci, the scientist, explained that Watson was engineered to identify word patterns and predict correct answers for the trivia game. It was not an all-purpose answer box ready to take on the commercial world, he said. It might well fail a second-grade reading comprehension test.
His explanation got a polite hearing from business colleagues, but little more.
“It wasn’t the marketing message,” recalled Mr. Ferrucci, who left IBM the following year.
It was, however, a prescient message.
IBM poured many millions of dollars in the next few years into promoting Watson as a benevolent digital assistant that would help hospitals and farms as well as offices and factories. The potential uses, IBM suggested, were boundless, from spotting new market opportunities to tackling cancer and climate change. An IBM report called it “the future of knowing.”
IBM’s television ads included playful chats Watson had with Serena Williams and Bob Dylan. Watson was featured on “60 Minutes.” For many people, Watson became synonymous with A.I.
And Watson wasn’t just going to change industries. It was going to breathe new life into IBM — a giant company, but one dependent on its legacy products. Inside IBM, Watson was thought of as a technology that could do for the company what the mainframe computer once did — provide an engine of growth and profits for years, even decades.
Watson has not remade any industries. And it hasn’t lifted IBM’s fortunes. The company trails rivals that emerged as the leaders in cloud computing and A.I. — Amazon, Microsoft and Google. While the shares of those three have multiplied in value many times, IBM’s stock price is down more than 10 percent since Watson’s “Jeopardy!” triumph in 2011.
The company’s missteps with Watson began with its early emphasis on big and difficult initiatives intended to generate both acclaim and sizable revenue for the company, according to many of the more than a dozen current and former IBM managers and scientists interviewed for this article. Several of those people asked not to be named because they had not been authorized to speak or still had business ties to IBM.
More...
https://www.nytimes.com/2021/07/16/tech ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Artificial Intelligence Helps Solve One of Archaeology’s Biggest Mysteries
AI is transforming archaeology, uncovering hidden geoglyphs and solving ancient mysteries at an unprecedented speed—offering new insights into Earth’s long-lost civilizations.

For centuries, vast and intricate patterns etched into the earth have baffled archaeologists, but now, artificial intelligence (AI) is helping to crack one of archaeology’s greatest puzzles.
Working alongside IBM scientists, researchers are using AI to analyze and interpret vast swathes of aerial imagery, revealing hidden geoglyphs and offering new insights into the purpose of ancient designs.
This breakthrough, highlighted in an article by BBC Science Focus https://www.sciencefocus.com/planet-ear ... cal-puzzle, is revolutionizing the way archaeologists approach ancient mysteries, allowing them to process data faster and more efficiently than ever before.
Artificial Intelligence Unlocks New Discoveries
The breakthrough came when archaeologists, led by Prof Masato Sakai from Yamagata University, began applying AI to the vast desert landscape of the Nazca region. AI systems trained to recognize geoglyphs in aerial imagery identified over 300 new figurative geoglyphs in just six months, almost doubling the previously known total.
AI’s ability to process enormous quantities of satellite, drone, and aerial imagery at lightning speed is revolutionizing the way archaeologists conduct surveys. This method has dramatically accelerated the discovery process, allowing researchers to identify geoglyphs that would have taken years to find using traditional fieldwork methods.
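The team's model isn't described in detail here, but the survey pattern such systems follow can be sketched: cut the imagery into tiles, score every tile with a classifier, and hand the highest-scoring tiles to archaeologists for review. In the sketch below, a simple gradient-energy score and a synthetic image stand in for the trained geoglyph detector and the real aerial data.

```python
# Generic tile-scanning sketch; the scoring function is a stand-in for a
# trained geoglyph classifier, and the "image" is noise plus one bright
# rectangle acting as a fake ground figure.
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(0, 1, (512, 512))
image[200:260, 300:420] += 3.0          # fake geoglyph-like feature
TILE = 64

def score_tile(tile):
    # Stand-in for a trained model: mean absolute gradient within the tile.
    gy, gx = np.gradient(tile)
    return float(np.abs(gy).mean() + np.abs(gx).mean())

candidates = []
for y in range(0, image.shape[0], TILE):
    for x in range(0, image.shape[1], TILE):
        s = score_tile(image[y:y + TILE, x:x + TILE])
        candidates.append((s, y, x))

candidates.sort(reverse=True)
print("top candidate tiles (score, row, col):")
for s, y, x in candidates[:3]:
    print(round(s, 2), y, x)
```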
By automating the analysis of massive datasets, AI has allowed researchers to take a broader view of archaeological sites. Previously, the labor-intensive process of surveying, mapping, and analyzing sites required years of manual work, limiting the scale of discovery.
AI has removed these barriers, enabling archaeologists to detect patterns, structures, and features that were once invisible to the human eye. The integration of machine learning and advanced imaging techniques is shifting archaeology into a new era of discovery, where large-scale mapping is now possible in a fraction of the time.
AI and the Third Technological Era of Archaeology
The Nazca Lines are not the only archaeological site benefiting from AI technology. As AI continues to evolve, it is being used across a wide range of archaeological projects, from burial mounds to shipwrecks.
AI excels at efficiently processing massive amounts of data, which would otherwise overwhelm human researchers. The ability of AI to detect patterns in vast swathes of aerial imagery is making it an indispensable tool for modern archaeologists.
Over the past decade, digital tools have become more integrated into archaeological research. Early advances in 3D modeling and remote sensing have laid the groundwork for AI-driven investigations.
The shift from traditional surveying methods to AI-assisted analysis has dramatically expanded the ability to map, record, and visualize historical sites in previously inaccessible areas.
AI is now refining the accuracy of archaeological discoveries, offering new insights into landscape use, settlement patterns, and cultural evolution across different time periods.
Nazca Lines: research suggests that the Nazca used ancient units of measurement to achieve the “perfect” proportions. (Illustration credit: Amplitude Studios)
Discovering Hidden Historical Sites Beyond the Nazca Lines
The potential of AI in archaeology isn’t confined to the Nazca Lines alone. AI has been employed in various regions to locate ancient settlements, hillforts, and shipwrecks, particularly in areas that are difficult to access or where fieldwork is restricted.
One such project, a collaboration between archaeologists and AI specialists, combined machine learning with remote sensing to help detect hillforts—enclosed settlements found on hilltops—based on aerial survey data. This collaboration demonstrated how AI could identify patterns in the landscape that would have been nearly impossible to detect using conventional methods.
In addition to identifying lost cities or settlements, AI has helped archaeologists pinpoint cultural artifacts, ancient ruins, and environmental changes that influenced human migration. In regions where conflict or extreme weather makes fieldwork dangerous, AI has become a vital tool for conducting archaeological research remotely.
AI-powered analysis allows researchers to extract information from aerial photographs, LiDAR scans, and historical maps, making it possible to study sites that were previously inaccessible.
The Future of Archaeology: Saving the Past with AI
While AI is proving to be a valuable asset in archaeology, it is not without its challenges. Prof Masato Sakai and his team scrutinized an average of 36 AI-generated suggestions to find one valid geoglyph.
Despite this, the sheer speed at which AI processes data enables archaeologists to prioritize promising leads, ensuring that their efforts are directed toward the most significant discoveries. The rapid evolution of AI-driven tools continues to refine the accuracy of archaeological research, leading to more efficient excavation strategies and preservation efforts.
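That hit rate supports a quick back-of-the-envelope estimate of the triage workload. The roughly 300 confirmed finds and the one-in-36 ratio come from the article; the minutes-per-suggestion figure is an assumption made here purely for illustration.

```python
# Back-of-the-envelope triage arithmetic for the reported hit rate.
confirmed_finds = 300              # "over 300" new geoglyphs, per the article
suggestions_per_find = 36          # ~36 suggestions reviewed per valid geoglyph
minutes_per_suggestion = 5         # assumption: a quick screen of each candidate

suggestions_reviewed = confirmed_finds * suggestions_per_find
precision = 1 / suggestions_per_find
review_hours = suggestions_reviewed * minutes_per_suggestion / 60

print(f"precision: {precision:.1%}")                       # about 2.8%
print(f"suggestions reviewed: {suggestions_reviewed}")     # 10800
print(f"estimated review time: {review_hours:.0f} hours")  # 900 hours
```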
The application of AI in archaeology is still in its early stages, but its potential is already becoming clear. As these technologies improve, they will enable archaeologists to uncover hidden civilizations, lost artifacts, and long-forgotten landscapes.
The combination of AI, machine learning, and remote sensing is not only accelerating discoveries but also ensuring that endangered sites are identified and preserved before they are lost forever. AI is not replacing archaeologists—it is amplifying their ability to explore, analyze, and protect humanity’s shared history.
https://indiandefencereview.com/artific ... chaeology/
Re: TECHNOLOGY AND DEVELOPMENT
Archaeologists Make Groundbreaking Discovery of a 5,000-Year-Old Fortress in Romania Lost for Millennia
A 5,000-year-old fortress has been uncovered deep in the forests of Romania, hidden by centuries of thick vegetation. Advanced LiDAR technology revealed intricate details that had been obscured for millennia.

In a remarkable breakthrough, archaeologists have uncovered the remains of a 5,000-year-old fortress in the forests of Neamț County, Romania. This significant discovery was made possible by the use of advanced LiDAR technology, which enabled researchers to capture the details of the structure despite its location deep in the forest, obscured by dense vegetation. According to Popular Mechanics, the fortress is believed to date back to the transition from the Neolithic period to the Bronze Age.
LiDAR Technology Brings Ancient Fortification to Light
LiDAR (Light Detection and Ranging) technology, which uses laser pulses to measure distances and create high-resolution models of the terrain, played a crucial role in revealing the ancient fortification.
This non-invasive method allowed researchers to “see” through the dense forest cover and map the fortress with precision. Vasile Diaconu, an archaeologist involved in the study, noted that the LiDAR scans provided a clear image of the almost 5,000-year-old structure, showing details that could not have been observed in the field due to the thick vegetation.
By using drones equipped with LiDAR, the research team generated an aerial view of the site, uncovering intricate details of the fortification. The discovery revealed that the fortress was elaborate and well-planned, featuring defensive features like large ditches and earthen mounds. These structures would have enhanced the fort’s defensive capabilities, suggesting that it was a strategically important site.
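Why earthworks stand out in LiDAR data can be shown with a small sketch: grid the ground returns into a digital elevation model, subtract a smoothed copy, and shallow ditches and mounds appear in the resulting local-relief map even under forest. The terrain below is synthetic, and real workflows also filter out vegetation returns first.

```python
# Local-relief illustration on a synthetic DEM with a faint ring ditch.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)
size = 200
x = np.linspace(0, 1, size)
hillside = np.outer(x, np.ones(size)) * 20.0          # gentle 20 m slope
dem = hillside + rng.normal(0, 0.05, (size, size))    # sensor/ground noise

# Add a faint ring ditch about 0.5 m deep around the "fortress".
yy, xx = np.mgrid[0:size, 0:size]
radius = np.hypot(yy - 100, xx - 100)
dem[(radius > 55) & (radius < 60)] -= 0.5

# Subtracting a smoothed surface removes the slope and leaves local relief.
local_relief = dem - uniform_filter(dem, size=31)

ditch = local_relief[(radius > 55) & (radius < 60)].mean()
background = local_relief[radius < 40].mean()
print(f"mean relief in ditch: {ditch:.2f} m")
print(f"mean relief elsewhere: {background:.2f} m")
```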
LiDAR Reveals the Strategic Significance of the Fortress
The results from the LiDAR scans indicate that the fortress was located in a high area, offering excellent visibility of the surrounding landscape. This strategic positioning would have made it easier for the inhabitants to detect approaching threats.
Additionally, the fortifications were reinforced by extensive ditches, some of which were several hundred meters long, highlighting the considerable human effort required for their construction. These ditches, along with the earthen mounds, would have been key to the fort’s defensive strength.
Vasile Diaconu explained that these findings show the complex nature of the site, underscoring the importance of using modern technologies like LiDAR to gain a better understanding of ancient sites. Without these tools, the details of such sites would remain hidden, making it much more difficult to study and interpret ancient civilizations.
Image Credit: Geocad Services
Modern Technology’s Essential Role in Archaeology
The discovery in Romania highlights the growing role of technology in archaeology. LiDAR is particularly valuable because it allows archaeologists to map large, inaccessible areas without disturbing the ground or the ruins themselves.
The ability to capture detailed data from above means that researchers can uncover lost cities, ancient fortifications, and even landscapes that were previously concealed by nature. Diaconu emphasized that modern tools are becoming essential for understanding the complexities of archaeological sites.
“Only by using modern technologies will we be able to better understand the complexities of archaeological sites,” he stated.
The use of these technologies is enabling researchers to explore and document sites that would otherwise remain hidden, providing new insights into ancient cultures.
A Personal Connection: Teacher and Student Collaboration
This archaeological project also had a personal dimension for Vasile Diaconu. The study was the result of a collaboration between Diaconu and his former student, Vlad Dulgheriu, who is now the owner of Geocad Services, the company that helped make the use of LiDAR technology possible.
Dulgheriu’s interest in his mentor’s work led to the partnership, and together they were able to bring this ancient site to light. Diaconu expressed pride in his former student’s achievements, saying:
“I’m honestly glad my former student has built his own road beautifully.”
As technology continues to advance, more discoveries like this one may soon follow. The use of LiDAR in archaeology is transforming the way researchers explore ancient sites, offering an unobstructed view of historical structures that were once hidden from sight.
https://dailygalaxy.com/2025/03/archaeo ... millennia/
Re: TECHNOLOGY AND DEVELOPMENT
Pig Kidney Removed From Alabama Woman After Organ Rejection
Towana Looney lived with the kidney longer than any other transplant patient had tolerated an organ from a genetically modified animal.

Towana Looney in December at NYU Langone. “Though the outcome is not what anyone wanted, I know a lot was learned from my 130 days with a pig kidney,” she said in a statement. Credit: Jackie Molloy for The New York Times
Surgeons removed a genetically engineered pig’s kidney from an Alabama woman after she experienced acute organ rejection, NYU Langone Health officials said on Friday.
Towana Looney, 53, lived with the kidney for 130 days, which is longer than anyone else has tolerated an organ from a genetically modified animal. She has resumed dialysis, hospital officials said.
Dr. Robert Montgomery, Ms. Looney’s surgeon and the director of the NYU Langone Transplant Institute, said that the so-called explant was not a setback for the field of xenotransplantation — the effort to use organs from animals to replace those that have failed in humans.
“This is the longest one of these organs has lasted,” he said in an interview, adding that Ms. Looney had other medical conditions that might have complicated her prognosis.
“All this takes time,” he said. “This game is going to be won by incremental improvements, singles and doubles, not trying to swing for the fences and get a home run.”
Further treatment of Ms. Looney might have salvaged the organ, but she and her medical team decided against it, Dr. Montgomery said.
“No. 1 is safety — we needed to be sure that she was going to be OK,” he said.
Another patient, Tim Andrews of Concord, N.H., has been living with a kidney from a genetically modified pig since Jan. 25. He has been hospitalized twice for biopsies, doctors at Massachusetts General Hospital in Boston said.
Two other patients who received similar kidneys in recent years died, as did two patients given hearts from genetically modified pigs.
Ms. Looney, who has returned to her home in Alabama after coming to New York for treatment and was not available for comment, said in a statement that she was grateful for the opportunity to participate in the groundbreaking procedure.
“For the first time since 2016, I enjoyed time with friends and family without planning around dialysis treatments,” Ms. Looney said in a statement provided by NYU Langone.
“Though the outcome is not what anyone wanted, I know a lot was learned from my 130 days with a pig kidney — and that this can help and inspire many others in their journey to overcome kidney disease,” she said.
Hospital officials said that Ms. Looney’s kidney function dropped after she experienced rejection of the organ. The cause was being investigated, Dr. Montgomery said.
But the response followed a reduction in immunosuppressive medications she had been taking, done in order to treat an unrelated infection, he added.
The first sign of trouble was a blood test done in Alabama that showed Ms. Looney had elevated levels of creatinine, a waste product that is removed from the blood through the kidneys. Elevated levels signal there may be a problem with kidney function.
Ms. Looney was admitted to the hospital, but when her creatinine levels continued to climb, she flew to New York, where doctors biopsied the kidney and found clear signs of rejection, Dr. Montgomery said.
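As a rough illustration of the kind of monitoring described here, the sketch below flags a creatinine series that is both above a reference limit and still climbing. The limit and the readings are invented for the example; they are not Ms. Looney's values and not clinical guidance.

# Illustrative only: flag a creatinine series that is elevated and climbing.
UPPER_LIMIT_MG_DL = 1.2  # assumed upper end of a typical reference range

def needs_follow_up(readings_mg_dl):
    """True when the latest value is above the limit and the series keeps rising."""
    if not readings_mg_dl:
        return False
    elevated = readings_mg_dl[-1] > UPPER_LIMIT_MG_DL
    rising = all(later > earlier for earlier, later in zip(readings_mg_dl, readings_mg_dl[1:]))
    return elevated and rising

print(needs_follow_up([1.1, 1.6, 2.3]))  # True: above range and climbing
print(needs_follow_up([1.0, 0.9, 1.0]))  # False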
The kidney was removed last Friday, hospital officials said.
“The decision was made by Ms. Looney and her doctors that the safest intervention would be to remove the kidney and return to dialysis rather than giving additional immunosuppression,” Dr. Montgomery said in a statement.
United Therapeutics Corporation, the biotech company that produced the pig that provided Ms. Looney’s kidney, thanked her for her bravery and said that the organ appeared to function well until the rejection.
The company expects to start a clinical trial of pig-kidney transplantation this year, starting with six patients and eventually growing to 50 patients.
Pig organs are seen as a potential solution to the shortage of donated organs, especially kidneys. More than 550,000 Americans have kidney failure and require dialysis, and about 100,000 of them are on a waiting list to receive a kidney.
But there is an acute need for human organs, and fewer than 25,000 transplants were performed in 2023. Many patients die while waiting.
https://www.nytimes.com/2025/04/11/heal ... e9677ea768
Re: TECHNOLOGY AND DEVELOPMENT
How Japan Built a 3D-Printed Train Station in 6 Hours
As Japan’s population shrinks, maintaining rail service in remote small towns is becoming a challenge. Is this the answer?

By Kiuko Notoya. Photographs and video by Noriko Hayashi.
Reporting from Arida, Japan
April 8, 2025
In the six hours between the departure of the night’s last train and the arrival of the morning’s first one, workers in rural Japan built an entirely new train station. It will replace a significantly bigger wooden structure that has served commuters in this remote community for over 75 years.
The new station’s components were 3D-printed elsewhere and assembled on site last month, in what the railway’s operators say is a world first. It may look more like a shelter than a station, but building one the traditional way would have taken more than two months and cost twice as much, according to the West Japan Railway Company.
As Japan’s population ages and its work force shrinks, the maintenance of railway infrastructure, including outdated station buildings, is a growing issue for railway operators. Rural stations with dwindling numbers of users have posed a particular challenge.
The new station, Hatsushima, is in a quiet seaside town that’s part of Arida, a city of about 25,000 people in Wakayama Prefecture, which borders two popular tourist destinations, Osaka and Nara prefectures. The station, served by a single line with trains that run one to three times an hour, handles around 530 riders a day.
Yui Nishino, 19, uses it every day for her commute to university. She said she was surprised when she first heard that the world’s first 3D-printed station building was going to be built here.
“Watching it, the work is progressing at a speed that would be impossible with normal construction,” she said. “I hope that they can make more buildings with 3D-printing technology.”
Video https://vp.nyt.com/video/2025/03/27/136 ... _1080p.mp4
Timelapse video from Serendix, the company that made the station’s parts, shows the seven-day process. Credit: Serendix Inc.
Serendix, the construction firm that worked with West Japan Railway on the project, said printing the parts and reinforcing them with concrete took seven days.
The printing was done at a factory in Kumamoto Prefecture on the southwestern island of Kyushu. The parts left the factory on the morning of March 24 to be transported about 500 miles northeast by road to Hatsushima Station.
“Normally, construction takes place over several months while the trains are not running every night,” said Kunihiro Handa, a co-founder of Serendix. Construction work near commercial lines is subject to strict restrictions and is usually carried out overnight so as not to disrupt timetables.
Video https://vp.nyt.com/video/2025/03/28/136 ... _1080p.mp4
Workers assembled the station at the site. Credit: Serendix Inc.
As trucks carrying the 3D-printed parts started pulling in on a Tuesday night in late March, several dozen residents gathered to watch the first-of-its-kind initiative get underway, in a place deeply familiar to them.
Then, after the last train pulled away at 11:57 p.m., workers got busy building the new station.
In less than six hours, the preprinted parts, made of a special mortar, were assembled. They were delivered on separate trucks, and a large crane was used to lift each one down to where workers were piecing them together, just a few feet from the old station.
Image: A workman on a stepladder inside a building being assembled. It took less than six hours to put the parts together.
Image: Two workers in orange high-visibility jackets and yellow helmets stand in the dark. Construction started after the night’s last train departed and was finished before the first train arrived in the morning.
The new station, which measures just over 100 square feet, was completed before the first train arrived at 5:45 a.m. It is a minimalistic, white building, featuring designs that include a mandarin orange and a scabbardfish, specialties of Arida.
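For a sense of how tight that overnight window was, the quick calculation below uses the two times reported above (the calendar date is arbitrary):

from datetime import datetime, timedelta

last_train = datetime(2025, 3, 25, 23, 57)   # last departure, 11:57 p.m.
first_train = datetime(2025, 3, 26, 5, 45)   # first arrival the next morning, 5:45 a.m.

window = first_train - last_train
print(window)                       # 5:48:00, i.e. five hours and 48 minutes of track time
print(window < timedelta(hours=6))  # True: the crew had under six hours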
It still needed interior work, as well as equipment like ticket machines and transportation card readers. West Japan Railway said it expected to open the new building for use in July.
Image: Workers among scaffolding as parts are lowered from a crane. The new station won’t go into service until July, West Japan Railway said.
Railway officials say that they hope the station will show how service can be maintained in remote locations with new technology and fewer workers.
“We believe that the significance of this project lies in the fact that the total number of people required will be reduced greatly,” said Ryo Kawamoto, president of JR West Innovations, a venture capital unit of the rail operator.
The wooden building that the new station will replace was completed in 1948. Since 2018, it has been automated, like many smaller stations in Japan.
Toshifumi Norimatsu, 56, who manages the post office a few hundred feet away, had bittersweet feelings about the new building.
“I am a little sad about the old station being taken down,” he said. “But I would be happy if this station could become a pioneer and benefit other stations.”
Image: A train by a platform in a railway station just before dawn. The first morning train leaving Hatsushima Station after the new building was assembled.
https://www.nytimes.com/2025/04/08/worl ... roid-share
Re: TECHNOLOGY AND DEVELOPMENT
Popular Mechanics
4,000 Meters Below Sea Level, Scientists Have Found the Spectacular 'Dark Oxygen'
Darren Orf
Sun, April 6, 2025 at 7:35 AM CDT
Scattered across an abyssal plain known as the Clarion-Clipperton Zone (CCZ) are polymetallic nodules that are a potato-sized prize for mining companies in search of materials needed for humanity's green energy transition.
A study analyzing these nodules reveals that the rocky lumps are capable of producing “dark oxygen” 4,000 meters below sea level, where light cannot reach.
While this discovery could upend our understanding of how life started on Earth, the study also complicates negotiations around deep-sea mining regulations as it showcases how little we really know about the ocean’s depths.
Nestled between Hawaii and the western coast of Mexico lies the Pacific Ocean’s Clarion-Clipperton Zone (CCZ), a 4.5 million-square-kilometer expanse of abyssal plain bordered by the Clarion and Clipperton Fracture Zones. Although this stretch of sea is a vibrant ecosystem filled with marine life, the CCZ is best known for its immense collection of potato-sized rocks known as polymetallic nodules.
These rocks, of which there are potentially trillions, are filled with rich deposits of nickel, manganese, copper, zinc, and cobalt. Those metals are vital for the batteries needed to power a green energy future, leading some mining companies to refer to the nodules as a “battery in a rock.”
However, a study reports that these nodules might be much more than simply a collection of valuable materials for electric cars—they also produce oxygen 4,000 meters below the surface where sunlight can't reach.
This unexpected source of “dark oxygen,” as it’s called, redefines the role these nodules play in the CCZ. The rocks could also rewrite the script on not only how life began on this planet, but also its potential to take hold on other worlds within our Solar System, such as Enceladus or Europa. The results of this study were published in the journal Nature Geoscience.
“For aerobic life to begin on the planet,” Andrew Sweetman, deep-sea ecologist with the Scottish Association for Marine Science and lead author of the study said in a press statement, “there had to be oxygen and our understanding has been that Earth’s oxygen supply began with photosynthetic organisms. But we now know that there is oxygen produced in the deep sea, where there is no light. I think we therefore need to revisit questions like: where could aerobic life have begun?”
The journey toward this discovery began more than a decade ago, when Sweetman started analyzing how oxygen levels decrease with depth in the ocean. So it came as a surprise in 2013 when sensors returned elevated oxygen readings in the CCZ. At the time, Sweetman dismissed the data as the result of faulty sensors, but later studies showed that this abyssal plain somehow produces oxygen. Taking note of the nodules’ “battery in a rock” tagline, Sweetman wondered whether the minerals in these nodules were acting as a kind of “geobattery,” separating hydrogen and oxygen via seawater electrolysis.
A 2023 study showed that various bacteria and archaea can create “dark oxygen,” so Sweetman and his team recreated the conditions of the CCZ in a laboratory and killed off any microorganisms with mercury chloride. Surprisingly, oxygen levels continued rising. According to Scientific American, Sweetman found a voltage of roughly 0.95 volts on the surface of these nodules, which likely build up charge as different mineral deposits accrete unevenly while they grow, and this natural potential appears to be enough to split seawater.
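The electrolysis idea can be put in concrete terms with Faraday's law: oxidizing water to O2 takes four electrons per molecule, so the total charge that flows sets an upper bound on how much oxygen can be produced. The current in the sketch below is purely hypothetical; the study reported the roughly 0.95-volt surface potential, not a current figure.

# Upper-bound O2 yield from a hypothetical electrolysis current (Faraday's law).
# 2 H2O -> O2 + 4 H+ + 4 e-, so four electrons per O2 molecule.
FARADAY = 96485.0  # coulombs per mole of electrons

def moles_o2(current_amps: float, seconds: float) -> float:
    charge = current_amps * seconds       # total charge in coulombs
    return charge / (4 * FARADAY)         # assumes 100% current efficiency

print(moles_o2(1e-3, 24 * 3600))  # 1 mA flowing for a day: roughly 2.2e-4 mol of O2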
This discovery adds more fuel to the already-fiery debate over what to do with these nodules. Mining outfits like the Metals Company, whose CEO coined the phrase “battery in a rock,” see the nodules as the answer to our energy problems. However, 25 countries want the governing body, the International Seabed Authority (ISA) Council, to implement a moratorium, or at the very least a precautionary pause, so more research can be conducted into how mining these nodules could affect the ocean. This is especially vital considering that the world’s seas already face a litany of climate challenges, including acidification, deoxygenation, and pollution.
In response to this discovery, Scripps Institution of Oceanography’s Lisa Levin, who wasn’t involved with the study, highlighted why such a moratorium is so important for protecting these deep-sea nodules in a comment to the Deep Sea Conservation Coalition:
This is an excellent example of what it means to have the deep ocean as a frontier, a relatively unexplored part of our planet. There are still new processes to discover that challenge what we know about life in our ocean. The production of oxygen at the seafloor by polymetallic nodules is a new ecosystem function that needs to be considered when assessing the impact of deep-sea mining. These findings underscore the importance of furthering independent deep-sea scientific research across the global ocean in order to inform deep-ocean policy.
The ISA is still negotiating with key players on deep-sea mining regulations.
So while the future of the world’s oceans is approaching a critical moment of conservation or exploitation, science has proven once again that disrupting these ecosystems could have consequences we can’t even imagine.
https://currently.att.yahoo.com/news/4- ... 00034.html
Re: TECHNOLOGY AND DEVELOPMENT
A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
A new wave of “reasoning” systems from companies like OpenAI is producing incorrect information more often. Even the companies don’t know why.
Video: https://vp.nyt.com/video/2025/04/30/138 ... _1080p.mp4
Last month, an A.I. bot that handles tech support for Cursor, an up-and-coming tool for computer programmers, alerted several customers about a change in company policy. It said they were no longer allowed to use Cursor on more than one computer.
In angry posts to internet message boards, the customers complained. Some canceled their Cursor accounts. And some got even angrier when they realized what had happened: The A.I. bot had announced a policy change that did not exist.
“We have no such policy. You’re of course free to use Cursor on multiple machines,” the company’s chief executive and co-founder, Michael Truell, wrote in a Reddit post. “Unfortunately, this is an incorrect response from a front-line A.I. support bot.”
More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information.
The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.
These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.”
Image: Amr Awadallah, the chief executive of Vectara, which builds A.I. tools for businesses, believes A.I. “hallucinations” will persist. Credit: Cayce Clifford for The New York Times
For several years, this phenomenon has raised concerns about the reliability of these systems. Though they are useful in some situations — like writing term papers, summarizing office documents and generating computer code — their mistakes can cause problems.
The A.I. bots tied to search engines like Google and Bing sometimes generate search results that are laughably wrong. If you ask them for a good marathon on the West Coast, they might suggest a race in Philadelphia. If they tell you the number of households in Illinois, they might cite a source that does not include that information.
Those hallucinations may not be a big problem for many people, but it is a serious issue for anyone using the technology with court documents, medical information or sensitive business data.
“You spend a lot of time trying to figure out which responses are factual and which aren’t,” said Pratik Verma, co-founder and chief executive of Okahu, a company that helps businesses navigate the hallucination problem. “Not dealing with these errors properly basically eliminates the value of A.I. systems, which are supposed to automate tasks for you.”
Cursor and Mr. Truell did not respond to requests for comment.
For more than two years, companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests.
The company found that o3 — its most powerful system — hallucinated 33 percent of the time when running its PersonQA benchmark test, which involves answering questions about public figures. That is more than twice the hallucination rate of OpenAI’s previous reasoning system, called o1. The new o4-mini hallucinated at an even higher rate: 48 percent.
When running another test called SimpleQA, which asks more general questions, the hallucination rates for o3 and o4-mini were 51 percent and 79 percent. The previous system, o1, hallucinated 44 percent of the time.
Image: A hand holds a smartphone open to the ChatGPT chatbot. Since the arrival of ChatGPT, the phenomenon of hallucination has raised concerns about the reliability of A.I. systems. Credit: Kelsey McClellan for The New York Times
In a paper detailing the tests, OpenAI said more research was needed to understand the cause of these results. Because A.I. systems learn from more data than people can wrap their heads around, technologists struggle to determine why they behave in the ways they do.
“Hallucinations are not inherently more prevalent in reasoning models, though we are actively working to reduce the higher rates of hallucination we saw in o3 and o4-mini,” a company spokeswoman, Gaby Raila, said. “We’ll continue our research on hallucinations across all models to improve accuracy and reliability.”
Hannaneh Hajishirzi, a professor at the University of Washington and a researcher with the Allen Institute for Artificial Intelligence, is part of a team that recently devised a way of tracing a system’s behavior back to the individual pieces of data it was trained on. But because systems learn from so much data — and because they can generate almost anything — this new tool can’t explain everything. “We still don’t know how these models work exactly,” she said.
Tests by independent companies and researchers indicate that hallucination rates are also rising for reasoning models from companies such as Google and DeepSeek.
Since late 2023, Mr. Awadallah’s company, Vectara, has tracked how often chatbots veer from the truth. The company asks these systems to perform a straightforward task that is readily verified: Summarize specific news articles. Even then, chatbots persistently invent information.
Vectara’s original research estimated that in this situation chatbots made up information at least 3 percent of the time and sometimes as much as 27 percent.
In the year and a half since, companies such as OpenAI and Google pushed those numbers down into the 1 or 2 percent range. Others, such as the San Francisco start-up Anthropic, hovered around 4 percent. But hallucination rates on this test have risen with reasoning systems. DeepSeek’s reasoning system, R1, hallucinated 14.3 percent of the time. OpenAI’s o3 climbed to 6.8 percent.
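Benchmarks like these ultimately report a simple ratio: the share of judged responses that contain unsupported claims. Below is a minimal sketch of that bookkeeping, with invented labels standing in for whatever judge (human or automated) a given benchmark actually uses.

# Minimal sketch of computing a hallucination rate from judged outputs.
# The labels are invented; real benchmarks rely on human or model-based judges.
judged_summaries = [
    {"id": "article-1", "hallucinated": False},
    {"id": "article-2", "hallucinated": True},
    {"id": "article-3", "hallucinated": False},
    {"id": "article-4", "hallucinated": False},
]

def hallucination_rate(results):
    flagged = sum(1 for r in results if r["hallucinated"])
    return flagged / len(results)

print(f"{hallucination_rate(judged_summaries):.1%}")  # 25.0%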
(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement regarding news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)
For years, companies like OpenAI relied on a simple concept: The more internet data they fed into their A.I. systems, the better those systems would perform. But they used up just about all the English text on the internet, which meant they needed a new way of improving their chatbots.
So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.
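As a toy illustration of learning by trial and error (and not of how any of these companies actually train their models), here is an epsilon-greedy bandit: the program tries actions, keeps a running estimate of each action's payoff, and increasingly favors whatever has worked so far. All numbers are invented.

import random

true_reward_prob = [0.2, 0.5, 0.8]   # hidden payoff of three possible actions
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
EPSILON = 0.1                        # fraction of the time spent exploring

random.seed(0)
for _ in range(2000):
    if random.random() < EPSILON:
        action = random.randrange(3)              # explore a random action
    else:
        action = estimates.index(max(estimates))  # exploit the current best guess
    reward = 1.0 if random.random() < true_reward_prob[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # running mean

print([round(e, 2) for e in estimates])  # the estimate for the best action approaches 0.8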
“The way these systems are trained, they will start focusing on one task — and start forgetting about others,” said Laura Perez-Beltrachini, a researcher at the University of Edinburgh who is among a team closely examining the hallucination problem.
Another issue is that reasoning models are designed to spend time “thinking” through complex problems before settling on an answer. As they try to tackle a problem step by step, they run the risk of hallucinating at each step. The errors can compound as they spend more time thinking.
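The arithmetic behind that compounding is straightforward: if each step is correct with probability p and the steps are independent, a chain of n steps comes out entirely correct with probability p raised to the nth power. With illustrative numbers:

# If each step is right with probability p, an n-step chain is error-free
# with probability p ** n (assuming independent steps; numbers are illustrative).
p = 0.95
for n in (5, 10, 20):
    print(n, round(p ** n, 2))   # 5 -> 0.77, 10 -> 0.6, 20 -> 0.36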
The latest bots reveal each step to users, which means the users may see each error, too. Researchers have also found that in many cases, the steps displayed by a bot are unrelated to the answer it eventually delivers.
“What the system says it is thinking is not necessarily what it is thinking,” said Aryo Pradipta Gema, an A.I. researcher at the University of Edinburgh and a fellow at Anthropic.
https://www.nytimes.com/2025/05/05/tech ... e9677ea768