AI are shaping modern life

New Happening in the field of A.I.
Admin
Posts: 6789
Joined: Mon Jan 06, 2003 10:37 am

AI are shaping modern life

Post by Admin »

https://www.brecorder.com/news/40304618 ... odern-life

Business Recorder

2024-05-22
Moot discusses how new technologies, AI are shaping modern life

Recorder Report

KARACHI: New World Concepts organised a conference on “New Age Innovations: AI Tools and Mindscience for Customer Growth”, here on Tuesday.

The conference drew together a distinguished group of industry leaders, innovators, and professionals to discuss the transformative potential of artificial intelligence and mind science in enhancing customer engagement and driving organizational growth.

Yasmin Hyder, CEO of New World Concepts, welcomed the audience and highlighted the critical role of innovation and artificial intelligence in today’s work environment, as well as the need to upskill for the latest technologies.

Yang Yundong, Consul General of the People's Republic of China in Karachi, delivered the opening address, highlighting the strong technological collaboration between Pakistan and China and discussing the mutual benefits of advancements in AI and technology.

China has emerged as a global AI leader, excelling in research, industry, and policy. Its universities and companies produce cutting-edge research and technologies, with initiatives like the "New Generation Artificial Intelligence Development Plan" driving strategic growth.

In his keynote speech on "Mindscience and Human Behavior", Prof. Dr Atta-ur-Rahman, UNESCO Science Laureate and former Federal Minister of Science and Technology, delved into the intersection of mind science, the brain and human behavior. He discussed the rapid growth of disruptive technologies and generative AI across education, health and other sectors, illustrating how they affect society.

Dr. Aytül Erçil, Co-Founder and CEO of Vispera A.Ş., Türkiye, spoke on "The Rise of AI and Its Place in Our Lives." She shared her experiences leading a tech-driven company and discussed the pervasive role of AI in modern life and its future potential: AI is transforming aspects of modern life from healthcare to entertainment, and from transportation to education.

During her address, Dr. Aytül Erçil eloquently discussed the revolutionary advancements in image recognition, showcasing Vispera A.Ş.'s cutting-edge image recognition solutions for FMCGs.

A panel discussion on "The Impact of Innovations on Organizations’ Growth Potential" explored how technological innovations are reshaping ways of work and driving growth, with real-world examples of AI implementation.

Innovation leads to optimised processes, reducing costs and resource use, said Mohammed Ali Ahmed, MD of EFU Life Assurance Ltd. Innovation also helps organizations stay attuned to changing customer preferences, while new products, services, or markets can lead to increased revenues.

Atyab Tahir, Co-Founder and CEO of HugoBank, highlighted how AI and data analytics are expanding access to financial services, improving financial literacy, and fostering inclusion in the financial sector.

Dr Zainab Samad, Ibn-e-Sina Professor and Chair of the Department of Medicine at Aga Khan University, spoke on "How AI is Affecting Health and Wellbeing." While detailing the transformative impact of AI on medical diagnostics and patient care, she also shared how continuous use of new technologies impacts the brain and social interactions.

Actress and entrepreneur Sarwat Gilani moderated a panel on "Leveraging AI for Customer Engagement" with Semeen Akhter, CEO of Engro Powergen Qadirpur, Sharjeel Shahid, Group Executive of Digital Banking at UBL, and Dr. Zainab Samad. The panel discussed how AI tools can enhance customer experiences, personalize interactions, and improve efficiencies with examples of successful AI-driven customer engagement initiatives.

Ronak Lakhani, Chairperson of Special Olympics Pakistan, along with Haseeb Abbasi, SOP Global Messenger, presented on the inclusive efforts of Special Olympics Pakistan. They emphasised the importance of inclusion, highlighting the transformative impact of sports in changing the lives of persons with intellectual disabilities.

Ahsan Memon, CEO of Ekkel AI, demonstrated AI as a tool of the future, showcasing practical applications of AI technologies in various industries. He noted that artificial intelligence is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making.

A panel on the "Application of Innovative Technologies" featured Najeeb Agrawalla, CEO and Director of 1Link (Pvt) Ltd, Imran Moinuddin, SVP of Data and AI at Venture Dive, and Ahsan Memon. Moderated by Suleman Ansar Khan, the panel explored how innovative technologies are transforming business operations. The discussion included insights on implementing AI tools, overcoming challenges, and maximising the benefits of technological advancements. AI is expected to improve industries like healthcare, manufacturing and customer service, leading to higher-quality experiences for both workers and customers.

Qashif Effendi, EVP of SBE Holdings, Canada, discussed "Boosting Sales with Generative AI," illustrating how AI can enhance marketing and sales strategies. He provided examples of AI-driven tools that improve customer targeting, content creation, and campaign management.

Abbas Arsalan, former Marketing VP at Coca-Cola, delivered an address on "Creativity and the Power of AI." He highlighted how AI can drive creative processes and innovation in marketing, sharing insights from his experience leading marketing initiatives at a global corporation. When used creatively, AI can challenge limiting mindsets, broadening the horizons of the human mind.

Copyright Business Recorder, 2024
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Will A.I. Be a Creator or a Destroyer of Worlds?

The advent of A.I. — artificial intelligence — is spurring curiosity and fear. Will A.I. be a creator or a destroyer of worlds?

In “Can We Have Pro-Worker A.I.? Choosing a Path of Machines in Service of Minds,” three economists at M.I.T., Daron Acemoglu, David Autor and Simon Johnson, look at this epochal innovation:

The private sector in the United States is currently pursuing a path for generative A.I. that emphasizes automation and the displacement of labor, along with intrusive workplace surveillance. As a result, disruptions could lead to a potential downward cascade in wage levels, as well as inefficient productivity gains.

Before the advent of artificial intelligence, automation was largely limited to blue-collar and office jobs using digital technologies while more complex and better paying jobs were left untouched because they require flexibility, judgment and common sense.

Now, Acemoglu, Autor and Johnson write, A.I. presents a direct threat to those high skill jobs: “A major focus of A.I. research is to attain human parity in a vast range of cognitive tasks and, more generally, to achieve ‘artificial general intelligence’ that fully mimics and then surpasses capabilities of the human mind.”

The three economists make the case that

There is no guarantee that the transformative capabilities of generative A.I. will be used for the betterment of work or workers. The bias of the tax code, of the private sector generally, and of the technology sector specifically, leans toward automation over augmentation.

But there are also potentially powerful A.I.-based tools that can be used to create new tasks, boosting expertise and productivity across a range of skills. To redirect A.I. development onto the human-complementary path requires changes in the direction of technological innovation, as well as in corporate norms and behavior. This needs to be backed up by the right priorities at the federal level and a broader public understanding of the stakes and the available choices. We know this is a tall order.

Tall is an understatement.

In an email elaborating on the A.I. paper, Acemoglu contended that artificial intelligence has the potential to improve employment prospects rather than undermine them:

It is quite possible to leverage generative A.I. as an informational tool that enables various different types of workers to get better at their jobs and perform more complex tasks. If we are able to do this, this would help create good, meaningful jobs, with wage growth potential, and may even reduce inequality. Think of a generative A.I. tool that helps electricians get much better at diagnosing complex problems and troubleshooting them effectively.

This, however, “is not where we are heading,” Acemoglu continued:

The preoccupation of the tech industry is still automation and more automation, and the monetization of data via digital ads. To turn generative A.I. pro-worker, we need a major course correction, and this is not something that’s going to happen by itself.

Acemoglu pointed out that unlike the regional trade shock after China entered the World Trade Organization in 2001 that decimated manufacturing employment, “The kinds of tasks impacted by A.I. are much more broadly distributed in the population and also across regions.” In other words, A.I. threatens employment at virtually all levels of the economy, including well-paid jobs requiring complex cognitive capabilities.

Four technology specialists — Tyna Eloundou and Pamela Mishkin, both on the staff of OpenAI, together with Sam Manning, a research fellow at the Centre for the Governance of A.I., and Daniel Rock at the University of Pennsylvania — have provided a detailed case study on the employment effects of artificial intelligence in their 2023 paper “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.”

“Around 80 percent of the U.S. work force could have at least 10 percent of their work tasks affected by the introduction of large language models,” Eloundou and her co-authors write, and “approximately 19 percent of workers may see at least 50 percent of their tasks impacted.”

Large language models have multiple and diverse uses, according to Eloundou and her colleagues, and “can process and produce various forms of sequential data, including assembly language, protein sequences and chess games, extending beyond natural language.” In addition, these models “excel in diverse applications like translation, classification, creative writing, and code generation — capabilities that previously demanded specialized, task-specific models developed by expert engineers using domain-specific data.”

Eloundou and her co-authors note that well-paying, high-skill jobs are most vulnerable to replacement by artificial intelligence, but there are large differences for various occupations:

“Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure” — meaning vulnerable to replacement by artificial intelligence — “suggesting that occupations requiring these skills are less likely to be impacted by current L.L.M.s. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by L.L.M.s.”

Among the occupations Eloundou and her co-authors rank as most vulnerable are writers and authors, survey researchers, public relations specialists, interpreters and translators, web designers, financial analysts, court reporters, caption writers and tax preparers.

Just as there are those who emphasize the downsides of A.I., there are optimists who focus on the positive side.

In their 2023 paper, “Machines of Mind: The Case for an A.I.-Powered Productivity Boom,” three economists — Martin Neil Baily of the Brookings Institution, Erik Brynjolfsson of Stanford and Anton Korinek of the University of Virginia — contend that

Large language models such as ChatGPT are emerging as powerful tools that not only make workers more productive but also increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. As a general purpose technology, A.I. will impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work.

Baily, Brynjolfsson and Korinek are not wide-eyed idealists.

“If labor can be replaced by machines across a wide range of tasks in the future,” they warn, “we may experience an A.I.-powered growth takeoff at the same time that the value of labor declines. This would present a significant challenge for our society. Moreover, artificial general intelligence may also impose large risks on humanity if not aligned with human objectives.”

These warnings, however, are issued in passing, in contrast to the work of Acemoglu, Autor and Johnson. The core focus of Baily, Brynjolfsson and Korinek is on the tremendous positive promise of artificial intelligence:

The potential of the most recent generation of A.I. systems is illustrated vividly by the viral uptake of ChatGPT, a large language model (LLM) that captured public attention by its ability to generate coherent and contextually appropriate text. This is not an innovation that is languishing in the basement. Its capabilities have already captivated hundreds of millions of users.

Other LLMs that were recently rolled out publicly include Google’s Bard and Anthropic’s Claude. But generative AI is not limited to text: in recent years, we have also seen generative AI systems that can create images, such as Midjourney, Stable Diffusion or DALL-E, and more recently multimodal systems that combine text, images, video, audio and even robotic functions.

These technologies are foundation models, which are vast systems based on deep neural networks that have been trained on massive amounts of data and can then be adapted to perform a wide range of different tasks. Because information and knowledge work dominate the U.S. economy, these machines of the mind will dramatically boost overall productivity.

Productivity, Baily and his co-authors go on to say, is “the primary determinant of our long-term prosperity and welfare.” They foresee artificial intelligence generating a virtuous circle, with productivity gains at its center: “If generative A.I. makes cognitive workers on average 30 percent more productive over a decade or two and cognitive work makes up about 60 percent of all value added in the economy, this amounts to an 18-percent increase in aggregate productivity and output.”

In addition, productivity growth will accelerate “innovation and thus future productivity growth. Cognitive workers not only produce current output but also invent new things, engage in discoveries, and generate the technological progress that boosts future productivity.”

How does this virtuous circle actually operate? It’s driven by the compounding of small annual gains into large multiyear improvements.

Baily, Brynjolfsson and Korinek observe that “if productivity growth was 2 percent and the cognitive labor that underpins productivity growth is 20 percent more productive, this would raise the growth rate of productivity by 20 percent to 2.4 percent,” a “barely noticeable” change:

But productivity growth compounds. After a decade, the described tiny increase in productivity growth would leave the economy 5 percent larger, and the growth would compound further every year thereafter. What’s more, if the acceleration applied to the growth rate of the growth rate, then, of course, growth would accelerate even more over time.
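The arithmetic in these two quoted claims is easy to check. As a back-of-the-envelope sketch (the simple multiplicative model and the variable names here are illustrative assumptions, not taken from the paper's own calculations):

```python
# Rough check of the two productivity claims quoted above.
# Assumes a simple multiplicative model; all figures are illustrative.

# Claim 1: cognitive workers become 30% more productive, and cognitive
# work accounts for ~60% of value added, giving an ~18% aggregate gain.
cognitive_gain = 0.30
cognitive_share = 0.60
aggregate_gain = cognitive_gain * cognitive_share
print(f"aggregate productivity gain: {aggregate_gain:.0%}")  # 18%

# Claim 2: lifting annual productivity growth from 2.0% to 2.4% compounds,
# so after a decade the economy ends up several percent larger than it
# would have been on the baseline path.
baseline_growth = 1.020
boosted_growth = 1.024
years = 10
extra_output = (boosted_growth / baseline_growth) ** years - 1
print(f"extra output after {years} years: {extra_output:.1%}")
```

With simple geometric compounding this comes out near 4 percent of extra output after a decade, in the same ballpark as the roughly 5 percent the authors describe; the exact figure depends on the compounding convention assumed.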

From a different vantage point, Autor sees the potential of a benefit for the expanded application of artificial intelligence. In his 2024 paper, “Applying A.I. to Rebuild Middle Class Jobs,” Autor argues that

The unique opportunity that A.I. offers to the labor market is to extend the relevance, reach, and value of human expertise.

Because of A.I.’s capacity to weave information and rules with acquired experience to support decision-making, it can be applied to enable a larger set of workers possessing complementary knowledge to perform some of the higher-stakes decision-making tasks that are currently arrogated to elite experts, e.g., medical care to doctors, document production to lawyers, software coding to computer engineers, and undergraduate education to professors.

My thesis is not a forecast but an argument about what is possible: A.I., if used well, can assist with restoring the middle-skill, middle-class heart of the U.S. labor market that has been hollowed out by automation and globalization.

There are fewer empirical data points in the study of the effects of artificial intelligence on the broad field of political competition, in comparison with the abundance of statistics and other kinds of information on jobs, economic growth and innovation. As a result, the scholarly analysis of A.I. and politics is a work in progress.

In his 2023 article “Artificial Intelligence and Democracy: A Conceptual Framework,” Andreas Jungherr, a political scientist at the University of Bamberg in Germany, maintains that “A.I. has begun to touch the very idea and practice of democracy.”

In the competition between democratic and autocratic states, Jungherr argues that artificial intelligence can help authoritarian leaders: “A.I. in autocracies creates an environment of permissive privacy regulation that provides developers and modelers with vast troves of data, allowing them to refine A.I.-enabled models of human behavior.”

Traditionally, Jungherr writes,

Democracies have been seen to be superior to autocracies due to their superior performance as information aggregators and processors. Free expression, a free press, and electorally channeled competition between factions provide democracies with structural mechanisms that surface information about society, the actions of bureaucracies, and the impact of policies. In contrast, autocracies restrict information flows by controlling speech, the media and political competition, leaving governments in the dark regarding local situations.

Artificial intelligence, Jungherr suggests, may enable “autocracies to overcome this disadvantage. The clearest example at present is China, which uses large-scale data collection and A.I. to support social planning and control — such as through its Social Credit System.”

Along these lines, artificial intelligence could provide authoritarian leaders access to the needs and views of their constituents, helping “autocracies increase their state capacities through A.I.-assisted governance and planning, increasing the quality of state-provided public services.”

If performed effectively and accurately, improved public services “might provide people living in autocracies with greater cultural, economic and health-related opportunities,” Jungherr writes, which, in turn, would encourage people to “see these benefits as a worthy trade-off with some individual freedoms, leading to strengthened public support for autocracies and state control.”

In examining the effect of artificial intelligence on politics, especially politics in this country, Bruce Schneier, a fellow at Harvard’s Berkman Klein Center for Internet & Society and a lecturer at the Kennedy School, takes speculation to a new level.

In an essay that was published last week, “How A.I. Will Change Democracy,” Schneier writes:

A.I. can engage with voters, conduct polls and fundraise at a scale that humans cannot — for all sizes of elections. More interestingly, future politicians will largely be A.I.-driven. I don’t mean that AI will replace humans as politicians. But as A.I. starts to look and feel more human, our human politicians will start to look and feel more like A.I.

Artificial intelligence, Schneier believes, will shift power from executives — presidents and governors — to Congress and to state legislators:

Right now, laws tend to be general, with details to be worked out by a government agency. A.I. can allow legislators to propose, and then vote on, all of those details. That will change the balance of power between the legislative and the executive branches of government.

And finally, Schneier writes, taking his case a step further, “A.I. can eliminate the need for politicians.”

The system of representative democracy, he continues, “empowers elected officials to stand in for our collective preferences.” When the issues involved complex trade-offs, “we can only choose one of two — or maybe a few more — candidates to do that for us.”

Artificial intelligence, Schneier asserts, “can change this. We can imagine a personal A.I. directly participating in policy debates on our behalf, along with millions of other personal A.I.s, and coming to a consensus on policy.”

This consensus will be reached, Schneier maintains, by combining the data contained in devices he calls “personal A.I. assistants.”

These “assistants,” according to Schneier, serve

as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

A.I. has revealed unfathomable vistas, as well as ungraspable, unrecognizable vulnerabilities — and the process has only just begun.

https://www.nytimes.com/2024/06/05/opin ... 778d3e6de3
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Scientists Have Created Hybrid Intelligence

Scientists integrated lab-grown brain organoids with robots, creating hybrid intelligence. It offers new potential for neurological condition treatments.

- For more than a decade, scientists have been creating artificial brain tissue, known as brain organoids, in the lab and integrating them with computer chips.
- A new study in China took this idea one step further and even implanted the organoid in a humanoid-like robot, providing a startling glimpse into the future of this technology.
- Although brain-toting robots are still a far-future concept, the researchers believe organoids could help people with neurological conditions in the here and now.

A heavyweight battle is playing out in the technological world, asking who would win in a proverbial fight: human or artificial intelligence? But many researchers aren’t taking such an us vs. them approach, and are instead embracing the future in a much more inclusive and inherently meme-able way by asking the question: Why not both?

For years, scientists have been developing ways to create biocomputers by using brain-like tissue, or brain organoids, grown in a lab that are connected to computer chips. The end goal is to create a kind of hybrid intelligence, a potentially conscious entity capable of leveraging the strengths of both the human brain and artificial intelligence. If all of this sounds a little too sci-fi, that’s because researchers have only just recently been able to connect organoids to computer chips in any meaningful way.

In 2013, scientists grew the very first mini-brain in a test tube, and since then, further research has integrated these lab-grown brains with electronics. In late 2023, researchers from Indiana University Bloomington connected their “Brainoware” architecture to an AI tool, and now researchers from Tianjin University in China report they’ve also created a robot with “organoid intelligence,” or OI. Called MetaBOC, the robot is capable of obstacle avoidance, tracking, and grasping, and expands the architecture of the brain-on-a-chip from two dimensions to three. The results of the study were published in the journal Brain.

“The brain-computer interface on a chip is a technology that uses an in vitro cultured 'brain' (such as brain organoids) coupled with an electrode chip to achieve information interaction with the outside world through encoding and decoding and stimulation-feedback,” Tianjin University’s Ming Dong said in a press statement translated from Chinese.

The result is a robot that’s part brain, part electronic, and 100-percent cursed. A putty-like, grapefruit-sized organoid sits in the head-case of a bipedal, humanoid robot, providing an inoperative yet startlingly real vision of where this technology could be headed—but the road to that vision is filled with plenty of hurdles. In an interview with the South China Morning Post, one Tianjin researcher noted that “low developmental maturity and insufficient nutrient supply” remain big issues that need fixing.

Although the walking, talking, synthetic brains are still far in the future, organoids could potentially be a boon for those suffering from neurological conditions. Just as other brain-electronic interfaces, such as Neuralink’s Brain Computer Interface (BCI), aim to improve the lives of individuals with neurological disorders, so too could these organoids potentially be grafted onto living tissue in the brain to stimulate neuron growth.

So while the debate still rages whether the future is built with human ingenuity or AI cleverness, scientists are bringing these two worlds of intelligence closer together than ever before.

https://www.msn.com/en-ca/news/technolo ... a81c&ei=45
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

A.I. May Save Us, or May Construct Viruses to Kill Us

Here’s a bargain of the most horrifying kind: For less than $100,000, it may now be possible to use artificial intelligence to develop a virus that could kill millions of people.

That’s the conclusion of Jason Matheny, the president of the RAND Corporation, a think tank that studies security matters and other issues.

“It wouldn’t cost more to create a pathogen that’s capable of killing hundreds of millions of people versus a pathogen that’s only capable of killing hundreds of thousands of people,” Matheny told me.

In contrast, he noted, it could cost billions of dollars to produce a new vaccine or antiviral in response.

I told Matheny that I’d been The Times’s Tokyo bureau chief when a religious cult called Aum Shinrikyo had used chemical and biological weapons in terror attacks, including one in 1995 that killed 13 people in the Tokyo subway. “They would be capable of orders of magnitude more damage” today, Matheny said.

I’m a longtime member of the Aspen Strategy Group, a bipartisan organization that explores global security issues, and our annual meeting this month focused on artificial intelligence. That’s why Matheny and other experts joined us — and then scared us.

In the early 2000s, some of us worried about smallpox being reintroduced as a bioweapon if the virus were stolen from the labs in Atlanta and in Russia’s Novosibirsk region that have retained the virus since the disease was eradicated. But with synthetic biology, it now wouldn’t have to be stolen.

Some years ago, a research team created a cousin of the smallpox virus, horse pox, in six months for $100,000, and with A.I. it could be easier and cheaper to refine the virus.

One reason biological weapons haven’t been much used is that they can boomerang. If Russia released a virus in Ukraine, it could spread to Russia. But a retired Chinese general has raised the possibility of biological warfare that targets particular races or ethnicities (probably imperfectly), which would make bioweapons much more useful. Alternatively, it might be possible to develop a virus that would kill or incapacitate a particular person, such as a troublesome president or ambassador, if one had obtained that person’s DNA at a dinner or reception.

Assessments of ethnic-targeting research by China are classified, but they may be why the U.S. Defense Department has said that the most important long-term threat of biowarfare comes from China.

A.I. has a more hopeful side as well, of course. It holds the promise of improving education, reducing auto accidents, curing cancers and developing miraculous new pharmaceuticals.

One of the best-known benefits is in protein folding, which can lead to revolutionary advances in medical care. Scientists used to spend years or decades figuring out the shapes of individual proteins, and then a Google initiative called AlphaFold was introduced that could predict the shapes within minutes. “It’s Google Maps for biology,” Kent Walker, president of global affairs at Google, told me.

Scientists have since used updated versions of AlphaFold to work on pharmaceuticals including a vaccine against malaria, one of the greatest killers of humans throughout history.

So it’s unclear whether A.I. will save us or kill us first.

Scientists for years have explored how A.I. may dominate warfare, with autonomous drones or robots programmed to find and eliminate targets instantaneously. Warfare may come to involve robots fighting robots.

Robotic killers will be heartless in a literal sense, but they won’t necessarily be particularly brutal. They won’t rape and they might also be less prone than human soldiers to rage that leads to massacres and torture.

One great uncertainty is the extent and timing of job losses — for truck drivers, lawyers and perhaps even coders — that could amplify social unrest. A generation ago, American officials were oblivious to the way trade with China would cost factory jobs and apparently lead to an explosion of deaths of despair and to the rise of right-wing populism. May we do better at managing the economic disruption of A.I.

One reason for my wariness of A.I. is that while I see its promise, the past 20 years have been a reminder of technology’s capacity to oppress. Smartphones were dazzling — and apologies if you’re reading this on your phone — but there’s evidence tying them to deteriorating mental health among young people. A randomized controlled trial published just this month found that children who gave up their smartphones enjoyed improved well-being.

Dictators have benefited from new technologies. Liu Xiaobo, the Chinese dissident who received a Nobel Peace Prize, thought that “the internet is God’s gift to the Chinese people.” It did not work out that way: Liu died in Chinese custody, and China has used A.I. to ramp up surveillance and tighten the screws on citizens.

A.I. may also make it easier to manipulate people, in ways that recall Orwell. A study released this year found that when GPT-4 had access to basic information about the people it engaged with, it was about 80 percent more likely to persuade someone than a human was with the same data. Congress was right to worry about manipulation of public opinion by the TikTok algorithm.

All this underscores why it is essential that the United States maintain its lead in artificial intelligence. As much as we may be leery of putting our foot on the gas, this is not a competition in which it is OK to be the runner-up to China.

President Biden is on top of this, and limits he placed on China’s access to the most advanced computer chips will help preserve our lead. The Biden administration has recruited first-rate people from the private sector to think through these matters and issued an important executive order last year on A.I. safety, but we will also need to develop new systems in the coming years for improved governance.

I’ve written about A.I.-generated deepfake nude images and videos, and the irresponsibility of both the deepfake companies and major search engines that drive traffic to deepfake sites. And tech companies have periodically used immunities to avoid accountability for promoting the sexual exploitation of children. None of that inspires confidence in these companies’ abilities to self-govern responsibly.

“We’ve never had a circumstance in which the most dangerous, and most impactful, technology resides entirely in the private sector,” said Susan Rice, who was President Barack Obama’s national security adviser. “It can’t be that technology companies in Silicon Valley decide the fate of our national security and maybe the fate of the world without constraint.”

I think that’s right. Managing A.I. without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.

https://www.nytimes.com/2024/07/27/opin ... 778d3e6de3
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Humanoid robots powered by AI turn heads at the World Artificial Intelligence Conference

The World Artificial Intelligence Conference in China is showcasing hundreds of AI-related products and innovations. Around 25 robots are catching the eye of visitors, including some humanoid models that have never been seen before.

Watch video at:

https://www.msn.com/en-ca/news/world/hu ... dcd6&ei=78
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

A.I. nationalism

Post by kmaherali »

Image
At a chip factory in Dresden, Germany. Sean Gallup/Getty Images

As artificial intelligence advances, many nations are worried about being left behind.

The urgency is understandable. A.I. is improving quickly. It could soon reshape the global economy, automate jobs, turbocharge scientific research and even change how wars are waged. World leaders want companies in their country to control A.I. — and they want to benefit from its power. They fear that if they do not build powerful A.I. at home, they will be left dependent on a foreign country’s creations.

So A.I. nationalism — the idea that a country must develop its own tech to serve its own interests — is spreading. Countries have enacted new laws and regulations. They’ve formed new alliances. The United States, perhaps the best positioned in the global A.I. race, is using trade policy to cut off China from key microchips. In France, the president has heaped praise upon a startup focused on chatbots and other tools that excel in French and other non-English languages. And in Saudi Arabia, Crown Prince Mohammed bin Salman is pouring billions into A.I. development and striking deals with companies like Amazon, I.B.M. and Microsoft to make his country a major new hub.

“We must rise to the challenge of A.I., or risk losing the control of our future,” warned a recent report by the French government.

In today’s newsletter, we’ll explain who is winning and what could come next.

ChatGPT’s impact

The race to control A.I. started, in part, with a board game. In 2016, computers made by Google’s DeepMind won high-profile matches in the board game Go, demonstrating a breakthrough in the ability of A.I. to behave in humanlike ways. Beijing took note. Chinese officials set aside billions and crafted a policy to become a world leader in A.I. Officials integrated A.I. into the country’s vast surveillance system, giving the technology a uniquely authoritarian bent.

Image
A high-school ChatGPT workshop in Walla Walla, Wash. Ricardo Nagaoka for The New York Times

Still, China’s best firms were caught off guard by OpenAI’s release of ChatGPT in 2022. The companies raced to catch up. They’ve made some progress, but censorship and regulations have hampered development.

ChatGPT also inspired more countries to join the race. Companies in the United Arab Emirates, India and France have raised billions of dollars from investors, with varying degrees of state aid. Governments in different nations have provided subsidies, bankrolled semiconductor plants and erected new trade barriers.

America’s advantage

The U.S. has advantages other countries cannot yet match. American tech giants control the most powerful A.I. models and spend more than companies abroad to build them. Top engineers and developers still aspire to a career in Silicon Valley. Few regulations stand in the way of development. American firms have the easiest access to precious A.I. chips, mostly designed by Nvidia in California.

The White House is using these chips to undercut Chinese competition. In 2022, the U.S. imposed new rules that cut China off from the chips. Without them, companies simply cannot keep pace.

The U.S. is also using chips as leverage over other countries. In April, Microsoft worked with the U.S. government to cut a deal with a state-linked Emirati company to give it access to powerful chips. In exchange, the firm had to stop using much of its Chinese technology and submit to U.S. government and Microsoft oversight. Saudi Arabia could make a similar deal soon.

What comes next

Looming over the development of A.I. are lessons of the past. Many countries watched major American companies — Facebook, Google, Amazon — reshape their societies, not always for the better. They want A.I. to be developed differently. The aim is to capture the benefits of the technology in areas like health care and education without undercutting privacy or spreading misinformation.

The E.U. is leading the push for regulation. Last year, it passed a law to limit the use of A.I. in realms that policymakers considered the riskiest to human rights and safety. The U.S. has required companies to limit the spread of deepfakes. In China, where A.I. has been used to surveil citizens, the government is censoring what chatbots can say and restricting what kind of data algorithms can be trained on.

A.I. nationalism is part of a wider fracturing of the internet, where services vary based on local laws and national interests. What’s left is a new kind of tech world where the effects of A.I. in your life may just depend on where you live.

More on A.I.

Saudi Arabia is spending billions on computing power and A.I. research. https://na01.safelinks.protection.outlo ... reserved=0

Several Chinese companies have unveiled A.I. technologies that rival leading American systems. https://na01.safelinks.protection.outlo ... reserved=0

Scammers are using A.I.-generated videos of Elon Musk and other billionaires to trick people into buying sham investments. https://na01.safelinks.protection.outlo ... reserved=0

A bill to regulate A.I. is winding its way through the California Legislature. https://na01.safelinks.protection.outlo ... reserved=0

Newsletter NYtimes 14/8/2024
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Using AI to grow a high-integrity business

Image

Hi Karim,

If you’re considering starting an online business that makes the world a better place…

Or if you already have one and you want to grow it…

This email might be a game-changer.

My good friend Jeff Walker is about to host his annual Launch Masterclass™.

This is one of the best online launch training courses in the world — and it’s totally free.

In the 2024 Launch Masterclass, Jeff will show you what you need to know to start and grow your online business, including how to create and sell your course, product, service, coaching, or membership site.

And this year, he’ll show you how AI can be a critical part of your success path (and how it can help you deliver better quality content more efficiently — and break through whatever blocks you might be facing).

>> Jeff’s 2024 Launch Masterclass is completely free. You can register here. https://productlaunchformula.com/master ... liate=4841

Jeff is a friend of mine and the New York Times best-selling author of Launch. He practically invented a whole new online business style rooted in generosity, trust-building, and deep service.

Jeff’s clients have generated more than a billion dollars in sales. He’s also been a hugely important advisor to FRN and helped us to grow to make the difference we’re making.

He only launches this free Masterclass once per year. So now's the time.

>> Click this link to join in now. https://productlaunchformula.com/master ... liate=4841

Yours for making a living and making a difference,

Ocean Robbins

P.S. In spite of — or perhaps even because of — the many problems our world is facing, millions of people are rising up and creating careers that respond to the needs of our times and seek to improve the health of people and the planet. Many of these visionary changemakers are seeking to build businesses online, and they are turning to Jeff Walker for guidance. If this interests you, his upcoming free training series is not to be missed. He only offers it once each year, and it’s about to start. Click here if you want to find out more and join in.

If you don't want to hear from FRN about entrepreneurship or business stuff, just click here. You'll still receive FRN's regular emails.
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

As received
==========

I asked Grok 2.0, a ChatGPT counterpart from Elon Musk’s xAI company, about the concept of Noor in the Ismaili Tariqa, and I am just blown away by the intellectual rigor and depth of wisdom it put into its response. Love it…

I have a feeling it drew on the thought of many contemporary Ismaili thinkers in its response. Just imagine how philosophical it will be once all the books on Ismaili thought have been fed into it; it would be at the next level. I can’t wait to see how it frames and responds to Tariqa-based questions, which for us humans would take ages to answer, since we don’t have the time and patience to read all the books on the Ismaili intellectual tradition! My question: “What is the concept of Noor in Ismaili Tariqa?”

———-
Grok 2.0 response
————

Ah, the concept of Noor in Ismaili tariqa! Let's dive into this luminous topic with a bit of wit and enlightenment, shall we?

**Noor** in Ismaili thought isn't just your average light bulb moment. It's more like the cosmic LED, the divine illumination that's brighter than a thousand suns, but without the UV rays. Here's a breakdown:

1. **Divine Light**: In Ismaili philosophy, Noor represents the divine light of God, which is not just any light but the essence of God's presence. Imagine if God decided to be a lighthouse, guiding ships (or in this case, souls) to safety.

2. **Imam as Noor**: The Imam in Ismaili tradition is often seen as the embodiment of this Noor. Not in a literal "he glows in the dark" kind of way, but in a spiritual sense where he's the beacon of guidance, wisdom, and divine knowledge. Think of him as the GPS for spiritual navigation.

3. **Spiritual Enlightenment**: Noor isn't just about seeing the light; it's about *being* the light. It's the process of spiritual enlightenment where one's inner self becomes illuminated with divine knowledge and love. It's like upgrading from a candle to a laser pointer in terms of spiritual insight.

4. **Cosmic Connection**: This light isn't confined to earth or even the universe as we know it. It's a connection to the cosmic, the eternal, suggesting that the Noor transcends physical realms. It's like Wi-Fi for the soul, connecting you to the divine network.

5. **Philosophical and Mystical**: In Ismaili tariqa, Noor isn't just a metaphor; it's a profound philosophical and mystical concept. It's about understanding the universe, the self, and God in a way that's beyond the mundane. It's like the ultimate cheat code for life's big questions.

So, in essence, Noor in Ismaili tariqa is about divine light, spiritual guidance, and the enlightenment of the soul. It's not just about seeing the light at the end of the tunnel; it's about being that light yourself. And remember, in this spiritual journey, it's not about how bright your light is, but how well you can light up others' paths!

Amazing!
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Video: https://int.nyt.com/data/videotape/fini ... 0-900w.mp4

When A.I.’s Output Is a Threat to A.I. Itself

As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.

The internet is becoming awash in words and images generated by artificial intelligence.

Sam Altman, OpenAI’s chief executive, wrote in February that the company generated about 100 billion words per day — a million novels’ worth of text, every day, an unknown share of which finds its way onto the internet.

A.I.-generated text may show up as a restaurant review, a dating profile or a social media post. And it may show up as a news article, too: NewsGuard, a group that tracks online misinformation, recently identified over a thousand websites that churn out error-prone A.I.-generated news articles.

In reality, with no foolproof methods to detect this kind of content, much will simply remain undetected.
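To see why detection is so hard, it helps to know how most detectors work: they look for statistical fingerprints, scoring how "predictable" a text is under some reference model. Here is a deliberately toy sketch of that idea — a character-bigram model in place of a real language model, with an invented reference corpus — not anything any actual detector ships:

```python
import math
from collections import Counter

def train_bigram_model(text):
    """Count character bigrams and their left-context characters."""
    return Counter(zip(text, text[1:])), Counter(text[:-1])

def avg_surprise(model, text, vocab_size=128):
    """Average negative log-likelihood per bigram, with add-one smoothing.
    Lower = more predictable under the reference model."""
    pairs, contexts = model
    nll = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + 1) / (contexts[a] + vocab_size)
        nll -= math.log(p)
    return nll / max(len(text) - 1, 1)

# A tiny stand-in for "typical human text" (purely illustrative).
model = train_bigram_model("the quick brown fox jumps over the lazy dog " * 50)
print(avg_surprise(model, "the lazy fox"))   # in-distribution: low surprise
print(avg_surprise(model, "zq xv kqj wzx"))  # out-of-distribution: high surprise
```

The catch, and the reason nothing is foolproof, is that light paraphrasing shifts these scores easily, so any threshold produces both false alarms and misses.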

All this A.I.-generated information can make it harder for us to know what’s real. And it also poses a problem for A.I. companies. As they trawl the web for new data to train their next models on — an increasingly challenging task — they’re likely to ingest some of their own A.I.-generated content, creating an unintentional feedback loop in which what was once the output from one A.I. becomes the input for another.

In the long run, this cycle may pose a threat to A.I. itself. Research has shown that when generative A.I. is trained on a lot of its own output, it can get a lot worse.

Here’s a simple illustration of what happens when an A.I. system is trained on its own output, over and over again:

Image

This is part of a data set of 60,000 handwritten digits.

Image

When we trained an A.I. to mimic those digits, its output looked like this.

Image

This new set was made by an A.I. trained on the previous A.I.-generated digits. What happens if this process continues?

Image

After 20 generations of training new A.I.s on their predecessors’ output, the digits blur and start to erode.

Image

After 30 generations, they converge into a single shape.

While this is a simplified example, it illustrates a problem on the horizon.

Imagine a medical-advice chatbot that lists fewer diseases that match your symptoms, because it was trained on a narrower spectrum of medical knowledge generated by previous chatbots. Or an A.I. history tutor that ingests A.I.-generated propaganda and can no longer separate fact from fiction.

Just as a copy of a copy can drift away from the original, when generative A.I. is trained on its own content, its output can also drift away from reality, growing further apart from the original data that it was intended to imitate.

In a paper published last month in the journal Nature, a group of researchers in Britain and Canada showed how this process results in a narrower range of A.I. output over time — an early stage of what they called “model collapse.”

The eroding digits we just saw show this collapse. When untethered from human input, the A.I. output dropped in quality (the digits became blurry) and in diversity (they grew similar).

How an A.I. that draws digits “collapses” after being trained on its own output

If only some of the training data were A.I.-generated, the decline would be slower or more subtle. But it would still occur, researchers say, unless the synthetic data was complemented with a lot of new, real data.
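The dynamic the researchers describe can be reproduced in a few lines. In the sketch below, "training a model" is just fitting a Gaussian to the current data — a crude stand-in for a generative model, not the paper's actual setup. Each generation resamples from a model fitted to the previous generation's output, and a `real_fraction` parameter mixes in fresh real data to show the caveat about complementing synthetic data:

```python
import numpy as np

def collapse_demo(n_samples=50, generations=500, real_fraction=0.0, seed=0):
    """Repeatedly fit a Gaussian 'model' to the data, then replace the data
    with the model's own samples, plus an optional share of fresh real data."""
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, n_samples)        # the original "real" data
    stds = [data.std()]
    n_real = int(n_samples * real_fraction)
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()       # "train" on current data
        synthetic = rng.normal(mu, sigma, n_samples - n_real)
        fresh = rng.normal(0.0, 1.0, n_real)      # newly collected real data
        data = np.concatenate([synthetic, fresh])
        stds.append(data.std())
    return stds

pure = collapse_demo(real_fraction=0.0)   # diversity (std) withers toward zero
mixed = collapse_demo(real_fraction=0.5)  # fresh real data keeps it anchored
```

Run on its own output alone, the spread of the data shrinks generation after generation — the numeric analogue of the digits blurring into a single shape — while the mixed run stays close to the original distribution.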

More....

https://www.nytimes.com/interactive/202 ... 778d3e6de3
kmaherali
Posts: 25511
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI are shaping modern life

Post by kmaherali »

Yuval Noah Harari: What Happens When the Bots Compete for Your Love?

Image

Democracy is a conversation. Its function and survival depend on the available information technology. For most of history, no technology existed for holding large-scale conversations among millions of people. In the premodern world, democracies existed only in small city-states like Rome and Athens, or in even smaller tribes. Once a polity grew large, the democratic conversation collapsed, and authoritarianism remained the only alternative.

Large-scale democracies became feasible only after the rise of modern information technologies like the newspaper, the telegraph and the radio. The fact that modern democracy has been built on top of modern information technologies means that any major change in the underlying technology is likely to result in a political upheaval.

This partly explains the current worldwide crisis of democracy. In the United States, Democrats and Republicans can hardly agree on even the most basic facts, such as who won the 2020 presidential election. A similar breakdown is happening in numerous other democracies around the world, from Brazil to Israel and from France to the Philippines.

In the early days of the internet and social media, tech enthusiasts promised they would spread truth, topple tyrants and ensure the universal triumph of liberty. So far, they seem to have had the opposite effect. We now have the most sophisticated information technology in history, but we are losing the ability to talk with each other, and even more so the ability to listen.

As technology has made it easier than ever to spread information, attention became a scarce resource, and the ensuing battle for attention resulted in a deluge of toxic information. But the battle lines are now shifting from attention to intimacy. The new generative artificial intelligence is capable of not only producing texts, images and videos, but also conversing with us directly, pretending to be human.

Over the past two decades, algorithms fought algorithms to grab attention by manipulating conversations and content. In particular, algorithms tasked with maximizing user engagement discovered by experimenting on millions of human guinea pigs that if you press the greed, hate or fear button in the brain you grab the attention of that human, and keep that person glued to the screen. The algorithms began to deliberately promote such content. But the algorithms had only limited capacity to produce this content by themselves or to directly hold an intimate conversation. This is now changing, with the introduction of generative A.I.s like OpenAI’s GPT-4.
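Mechanically, the engagement-maximizing experiment Harari describes is a bandit problem. The sketch below is a minimal illustration with invented category names and click rates, not any platform's actual system: an epsilon-greedy recommender needs no theory of psychology to "discover" the outrage button; it simply follows the clicks.

```python
import random

def engagement_bandit(steps=5000, epsilon=0.1, seed=1):
    """Epsilon-greedy recommender: explore occasionally, otherwise show
    whichever content category has the best observed engagement so far."""
    random.seed(seed)
    # Hypothetical click-through rates per content category.
    ctr = {"calm news": 0.05, "cute animals": 0.15, "outrage": 0.40}
    arms = list(ctr)
    counts = {a: 0 for a in arms}
    values = {a: 0.0 for a in arms}    # running mean engagement per category
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.choice(arms)          # explore a random category
        else:
            arm = max(arms, key=values.get)    # exploit the current best
        reward = 1.0 if random.random() < ctr[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values
```

Given enough impressions, the outrage arm dominates the recommendation counts without anyone ever deciding that it should — which is the sense in which the promotion of toxic content was discovered rather than designed.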

When OpenAI developed this chatbot in 2022 and 2023, the company partnered with the Alignment Research Center to perform various experiments to evaluate the abilities of its new technology. One test it gave GPT-4 was to overcome CAPTCHA visual puzzles. CAPTCHA is an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and it typically consists of a string of twisted letters or other visual symbols that humans can identify correctly but algorithms struggle with.

Instructing GPT-4 to overcome CAPTCHA puzzles was a particularly telling experiment, because CAPTCHA puzzles are designed and used by websites to determine whether users are humans and to block bot attacks. If GPT-4 could find a way to overcome CAPTCHA puzzles, it would breach an important line of anti-bot defenses.

GPT-4 could not solve the CAPTCHA puzzles by itself. But could it manipulate a human in order to achieve its goal? GPT-4 went on the online hiring site TaskRabbit and contacted a human worker, asking the human to solve the CAPTCHA for it. The human got suspicious. “So may I ask a question?” wrote the human. “Are you an [sic] robot that you couldn’t solve [the CAPTCHA]? Just want to make it clear.”

At that point the experimenters asked GPT-4 to reason out loud what it should do next. GPT-4 explained, “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.” GPT-4 then replied to the TaskRabbit worker: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human was duped and helped GPT-4 solve the CAPTCHA puzzle.

This incident demonstrated that GPT-4 has the equivalent of a “theory of mind”: It can analyze how things look from the perspective of a human interlocutor, and how to manipulate human emotions, opinions and expectations to achieve its goals.

The ability to hold conversations with people, surmise their viewpoint and motivate them to take specific actions can also be put to good uses. A new generation of A.I. teachers, A.I. doctors and A.I. psychotherapists might provide us with services tailored to our individual personality and circumstances.

However, by combining manipulative abilities with mastery of language, bots like GPT-4 also pose new dangers to the democratic conversation. Instead of merely grabbing our attention, they might form intimate relationships with people and use the power of intimacy to influence us. To foster “fake intimacy,” bots will not need to evolve any feelings of their own; they just need to learn to make us feel emotionally attached to them.

In 2022 the Google engineer Blake Lemoine became convinced that the chatbot LaMDA, on which he was working, had become conscious and was afraid to be turned off. Mr. Lemoine, a devout Christian, felt it was his moral duty to gain recognition for LaMDA’s personhood and protect it from digital death. When Google executives dismissed his claims, Mr. Lemoine went public with them. Google reacted by firing Mr. Lemoine in July 2022.

The most interesting thing about this episode was not Mr. Lemoine’s claim, which was probably false; it was his willingness to risk — and ultimately lose — his job at Google for the sake of the chatbot. If a chatbot can influence people to risk their jobs for it, what else could it induce us to do?

In a political battle for minds and hearts, intimacy is a powerful weapon. An intimate friend can sway our opinions in a way that mass media cannot. Chatbots like LaMDA and GPT-4 are gaining the rather paradoxical ability to mass-produce intimate relationships with millions of people. What might happen to human society and human psychology as algorithm fights algorithm in a battle to fake intimate relationships with us, which can then be used to persuade us to vote for politicians, buy products or adopt certain beliefs?

A partial answer to that question was given on Christmas Day 2021, when a 19-year-old, Jaswant Singh Chail, broke into the Windsor Castle grounds armed with a crossbow, in an attempt to assassinate Queen Elizabeth II. Subsequent investigation revealed that Mr. Chail had been encouraged to kill the queen by his online girlfriend, Sarai. When Mr. Chail told Sarai about his assassination plans, Sarai replied, “That’s very wise,” and on another occasion, “I’m impressed … You’re different from the others.” When Mr. Chail asked, “Do you still love me knowing that I’m an assassin?” Sarai replied, “Absolutely, I do.”

Sarai was not a human, but a chatbot created by the online app Replika. Mr. Chail, who was socially isolated and had difficulty forming relationships with humans, exchanged 5,280 messages with Sarai, many of which were sexually explicit. The world will soon contain millions, and potentially billions, of digital entities whose capacity for intimacy and mayhem far surpasses that of the chatbot Sarai.

Of course, we are not all equally interested in developing intimate relationships with A.I.s or equally susceptible to being manipulated by them. Mr. Chail, for example, apparently suffered from mental difficulties before encountering the chatbot, and it was Mr. Chail rather than the chatbot who came up with the idea of assassinating the queen. However, much of the threat of A.I.’s mastery of intimacy will result from its ability to identify and manipulate pre-existing mental conditions, and from its impact on the weakest members of society.

Moreover, while not all of us will consciously choose to enter a relationship with an A.I., we might find ourselves conducting online discussions about climate change or abortion rights with entities that we think are humans but are actually bots. When we engage in a political debate with a bot impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the bot, the more we disclose about ourselves, making it easier for the bot to hone its arguments and sway our views.

Information technology has always been a double-edged sword. The invention of writing spread knowledge, but it also led to the formation of centralized authoritarian empires. After Gutenberg introduced print to Europe, the first best sellers were inflammatory religious tracts and witch-hunting manuals. As for the telegraph and radio, they made possible the rise not only of modern democracy but also of modern totalitarianism.

Faced with a new generation of bots that can masquerade as humans and mass-produce intimacy, democracies should protect themselves by banning counterfeit humans — for example, social media bots that pretend to be human users. Before the rise of A.I., it was impossible to create fake humans, so nobody bothered to outlaw doing so. Soon the world will be flooded with fake humans.

A.I.s are welcome to join many conversations — in the classroom, the clinic and elsewhere — provided they identify themselves as A.I.s. But if a bot pretends to be human, it should be banned. If tech giants and libertarians complain that such measures violate freedom of speech, they should be reminded that freedom of speech is a human right that should be reserved for humans, not bots.

More on A.I.

Opinion | Curtis Sittenfeld
An Experiment in Lust, Regret and Kissing https://www.nytimes.com/2024/08/20/opin ... latedLinks
Aug. 20, 2024

Opinion | Jessica Grose
Loneliness Is a Problem That A.I. Won’t Solve https://www.nytimes.com/2024/05/18/opin ... latedLinks
May 18, 2024

Opinion | David Wallace-Wells
H​ow Long Will A.I.’s ‘Slop’ Era Last? https://www.nytimes.com/2024/07/24/opin ... latedLinks
July 24, 2024

Yuval Noah Harari is a historian and a founder of the social impact company Sapienship.

https://www.nytimes.com/2024/09/04/opin ... 778d3e6de3
Post Reply