AI Related Issues

Current issues, news and ethics
Post Reply
kmaherali
Posts: 23164
Joined: Thu Mar 27, 2003 3:01 pm

AI Related Issues

Post by kmaherali »

Godfather Of AI Warns Technology Could Invent Its Own Language: 'It Gets Scary...'

Geoffrey Hinton, the 'godfather of AI', said that AI has already demonstrated that it can think terrible thoughts.

Image
AI could develop its own language that humans may not understand.


Geoffrey Hinton, regarded by many as the 'godfather of artificial intelligence' (AI), has warned that the technology could get out of hand if chatbots manage to develop their own language. Currently, AI does its thinking in English, allowing developers to track what the technology is thinking, but there could come a point where humans might no longer understand what AI is planning to do, according to Mr Hinton.

"Now it gets more scary if they develop their own internal languages for talking to each other," he said on an episode of the "One Decision" podcast that aired last month.

"I wouldn't be surprised if they developed their own language for thinking, and we have no idea what they're thinking."

Mr Hinton added that AI has already demonstrated that it can think terrible thoughts, and it is not unthinkable that the machines could eventually think in ways that humans cannot track or interpret.

Warning about AI

Mr Hinton laid the foundations for machine learning that is powering today's AI-based products and applications. However, the Nobel laureate grew wary of AI's future development and cut ties with his employer, Google, in order to speak more freely on the issue.

"It will be comparable with the industrial revolution. But instead of exceeding people in physical strength, it's going to exceed people in intellectual ability. We have no experience of what it's like to have things smarter than us," said Mr Hinton at the time.

"I am worried that the overall consequence of this might be systems more intelligent than us that eventually take control."

Mr Hinton has been a big advocate of government regulation for the technology, especially given the unprecedented pace of development. His warning also comes against the backdrop of repeated instances of AI chatbots hallucinating, or fabricating, information.

In April, OpenAI's internal tests revealed that its o3 and o4-mini AI models were hallucinating, or making things up, much more frequently than even non-reasoning models such as GPT-4o. The company said it did not know why this was happening.

In a technical report, OpenAI said, "more research is needed" to understand why hallucinations are getting worse as it scales up its reasoning models.

https://www.ndtv.com/offbeat/godfather- ... ry-9012092
kmaherali
Posts: 23164
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI Related Issues

Post by kmaherali »

ON-DEMAND TRAINING

Generative AI Fundamentals


Build foundational knowledge of generative AI, including large language models (LLMs), with 4 short videos

Generative AI, such as ChatGPT and Dolly, has undoubtedly changed the technology landscape and unlocked transformational use cases, such as creating original content, generating code and expediting customer service. And the technology's applications are growing daily. Organizations that harness this transformative technology successfully will be differentiated in the market and be leaders in the future. Get up to speed on generative AI with this free on-demand training.

Here is how it works:

- Watch 4 short tutorial videos
- Pass the knowledge test
- Earn a badge for Generative AI Fundamentals that you can share on your LinkedIn profile or résumé

Videos included in this training:

- Welcome and Introduction to the Course
- LLM Applications
- Finding Success With Generative AI
- Assessing Potential Risks and Challenges

Earn your badge today and share your accomplishment on LinkedIn or your résumé.

https://www.databricks.com/resources/le ... fAQAvD_BwE
kmaherali
Posts: 23164
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI Related Issues

Post by kmaherali »

Release of ChatGPT-5 'Beginning of a New Era For Humanity'

Image
(tungnguyen0905/pixabay/Canva)

OpenAI released a keenly awaited new generation of its hallmark ChatGPT on Thursday, touting "significant" advancements in artificial intelligence capabilities as a global race over the technology accelerates.

ChatGPT-5 is rolling out free to all users of the AI tool, which is used by nearly 700 million people weekly, OpenAI said in a briefing with journalists.

Co-founder and chief executive Sam Altman described this latest iteration as "clearly a model that is generally intelligent."

Related: ChatGPT: 5 Surprising Truths About How AI Chatbots Actually Work https://www.sciencealert.com/chatgpt-5- ... ually-work

Altman cautioned that there is still work to be done to achieve the kind of artificial general intelligence (AGI) that thinks the way people do.

"This is not a model that continuously learns as it is deployed from new things it finds, which is something that, to me, feels like it should be part of an AGI," Altman said.

"But the level of capability here is a huge improvement."

Industry analysts have heralded the arrival of an AI era in which genius computers transform how humans work and play.

"As the pace of AI progress accelerates, developing superintelligence is coming into sight," Meta chief executive Mark Zuckerberg wrote in a recent memo.

"I believe this will be the beginning of a new era for humanity."

Altman said there were "orders of magnitude more gains" to come on the path toward AGI.

"Obviously… you have to invest in compute (power) at an eye-watering rate to get that, but we intend to keep doing it."

Tech industry rivals Amazon, Google, Meta, Microsoft and Elon Musk's xAI have been pouring billions of dollars into artificial intelligence since the blockbuster launch of the first version of ChatGPT in late 2022.

Chinese startup DeepSeek shook up the AI sector early this year with a model that delivers high performance using less costly chips.

'PhD-level expert'

With fierce competition around the world over the technology, Altman said ChatGPT-5 led the pack in coding, writing, health care and much more.

"GPT-3 felt to me like talking to a high school student – ask a question, maybe you get a right answer, maybe you'll get something crazy," Altman said.

"GPT-4 felt like you're talking to a college student; GPT-5 is the first time that it really feels like talking to a PhD-level expert in any topic."

Altman expects the ability to create software programs on demand – so-called "vibe-coding" – to be a "defining part of the new ChatGPT-5 era."

Image
'Vibe coding' allowed ChatGPT-5 to deliver a simple jumping ball game with a single prompt. (OpenAI)

In a blog post, British AI expert Simon Willison wrote about getting early access to ChatGPT-5.

"My verdict: it's just good at stuff," Willison wrote.

"It doesn't feel like a dramatic leap ahead from other (large language models) but it exudes competence – it rarely messes up, and frequently impresses me."

However, Musk wrote on X, formerly Twitter, that his Grok 4 Heavy AI model "was smarter" than ChatGPT-5.

Honest AI?

ChatGPT-5 was trained to be trustworthy and to provide answers that are as helpful as possible without aiding seemingly harmful missions, according to OpenAI safety research lead Alex Beutel.

"We built evaluations to measure the prevalence of deception and trained the model to be honest," Beutel said.

ChatGPT-5 is trained to generate "safe completions," sticking to high-level information that can't be used to cause harm, according to Beutel.

The company this week also released two new AI models that can be downloaded for free and altered by users, to challenge similar offerings by rivals.

The release of "open-weight language models" comes as OpenAI is under pressure to share the inner workings of its software in the spirit of its origin as a nonprofit.

© Agence France-Presse https://www.sciencealert.com/release-of ... r-humanity

********
OpenAI Aims to Stay Ahead of Rivals With New GPT-5 Technology

The A.I. start-up said its new flagship technology was faster, more accurate and less likely to make stuff up.

Image
The chief executive of OpenAI, Sam Altman, said GPT-5 was a significant upgrade from the last version of the company’s core technology. Credit: Yuichi Yamazaki/Agence France-Presse — Getty Images

By Cade Metz
Reporting from San Francisco


ChatGPT is getting another upgrade.

On Thursday, OpenAI unveiled a new flagship A.I. model, GPT-5, and began sharing the technology with the hundreds of millions of people who use ChatGPT, the company’s online chatbot.

During a briefing with journalists, OpenAI executives called GPT-5 a “major upgrade” over the systems that previously powered ChatGPT, saying the new technology was faster, more accurate and less likely to “hallucinate,” or make stuff up.

“It feels significantly better in obvious ways and in subtle ways,” OpenAI’s chief executive, Sam Altman, said. “GPT-5 is the first time that it feels like talking to an expert in any topic — a Ph.D.-level expert.”

Since launching the A.I. boom in late 2022 with the release of ChatGPT, OpenAI has consistently improved the technology that underpins its chatbot. This began with the release of the company’s GPT-4 technology in the spring of 2023 and continued through a series of A.I. models that could listen, look and talk and approximate the way people reason through complex problems.

OpenAI’s many rivals, including Google, Meta, the start-up Anthropic and China’s DeepSeek, have released similar technologies.

This is the first time that OpenAI has used a so-called reasoning model to power the free version of ChatGPT. Unlike the previous technologies, a reasoning model can spend time “thinking” through complex problems before settling on an answer.

“For most people on ChatGPT, this is their first introduction to reasoning,” said Nick Turley, the OpenAI vice president who oversees ChatGPT. “It just knows when to ‘think.’”

(The New York Times has sued OpenAI and its partner, Microsoft, accusing them of copyright infringement of news content related to A.I. systems. OpenAI and Microsoft have denied those claims.)

OpenAI said that the technology “feels more human” than previous models and that it allowed even novices to build simple software apps from short text prompts. One OpenAI engineer asked the system to generate an online app that could help people learn French, and it created the app in minutes.

Mr. Altman called the system a “significant step” along the path to the ultimate goal of the company and its rivals: artificial general intelligence, or A.G.I., a machine that can do anything the human brain can do. But he also acknowledged that it lacked many of the key ingredients needed to build such a machine.

Many experts say there is no clear path to developing A.G.I.

Earlier this week, OpenAI said it was “open sourcing” two other A.I. models that can power online chatbots, freely sharing the technology with researchers and businesses across the globe. Since unveiling ChatGPT three years ago, the company has mostly kept its technology under wraps. If people use these open-source models, OpenAI hopes they will also pay for its more powerful products.

In addition to offering a free chatbot via the internet, OpenAI sells access to a more powerful chatbot for $20 a month and sells a wide range of A.I. technologies to businesses and independent software developers.

The company is not yet profitable. It plans to raise $40 billion this year and is on pace to pull in revenues of $20 billion by year’s end.


OpenAI and ChatGPT

OpenAI to Give Away Some of the Technology That Powers ChatGPT https://www.nytimes.com/2025/08/05/tech ... atgpt.html
Aug. 5, 2025

OpenAI Unveils New ChatGPT That Can Reason Through Math and Science https://www.nytimes.com/2024/09/12/tech ... -math.html
Sept. 12, 2024

OpenAI Unveils New ChatGPT That Listens, Looks and Talks https://www.nytimes.com/2024/05/13/tech ... t-app.html
May 13, 2024

https://www.nytimes.com/2025/08/07/tech ... e9677ea768

*******************
kmaherali
Posts: 23164
Joined: Thu Mar 27, 2003 3:01 pm

Re: AI Related Issues

Post by kmaherali »

21 Ways People Are Using A.I. at Work

“I can give it tasks and just walk away.”

“It captures details I would have otherwise forgotten.”

“There’s so much low-hanging fruit.”

“The important thing is to maintain a reserve of skepticism.”

Upshot logo
21 Ways People Are Using A.I. at Work
By Larry Buchanan and Francesca Paris Aug. 11, 2025

A burst of experimentation followed ChatGPT's release to the public in late 2022. Now many people are integrating the newest models and custom systems into what they do all day: their work.

Chefs are using A.I. to invent recipes; doctors are using it to read M.R.I. and CT scans; scientists are unlocking discoveries. It’s helping workers with their day-to-day tasks: writing code, summarizing emails, creating ideas, generating curricula — even as it still makes plenty of mistakes.
Recent surveys have found that almost one in five U.S. workers say they use it at least semi-regularly for work. Twenty-one people told us how.
People are using A.I. to …

1. Select wines for restaurant menus
Sam McNulty

Restaurant owner and operator

Mr. McNulty, who owns restaurants, brewpubs and dance clubs in Cleveland, uses ChatGPT to analyze sales reports and brainstorm how to grow sales. He’s also used it to help pick wines. He sent a “voluminous” wine portfolio from a distributor to the chatbot and gave it some instructions — specific pricing and particular regions among them — and got back a list, including:

Herdade do Esporão Monte Velho Branco

Region: Vinho Regional Alentejano, Portugal

Grapes: Antão Vaz, Roupeiro & Perrum

Wholesale Est.: $7-9 per 750 ml

Why It’s Great: A crowd‐pleasing white that combines citrus, stone fruit and saline notes with bright acidity — an ideal food‐friendly pour for small plates or seafood.

“The results were astonishingly good and saved me and my team countless hours of meetings with wine reps, tastings and debate,” he said. “The only part of the wine-program building process I missed was the tastings ... so far the A.I. can't recreate the joy of taking that sip.”

2. Digitize a herbarium

Jordan Teisher

Curator and director

There are eight million dried plant specimens at the Missouri Botanical Garden herbarium in St. Louis. Now A.I. is helping identify them.
Experienced taxonomists can quickly recognize most specimens, said Mr. Teisher, but that requires years of training.

So the garden is building an A.I. model using spectral data — the pattern of light reflected by the plant. Leaves from many different kinds of plants are scanned, labeled and put into the model as training data. Then new plants can go through the same process, and the model will identify them. If the model is quite certain that the spectral data look the same as they do for other plants, it’ll say so. If not, the plant can go to an expert.

Image
Leaves are placed on a black plate to measure their “reflectance spectra,” part of building an A.I. model that can identify new specimens. Nathan Kwarta, Missouri Botanical Garden

“We can cut down on the time expert taxonomists are spending on common species,” Mr. Teisher said. “Rather than them getting five boxes of plants that come in, they can get a small box that says, ‘Here are the ones the model isn’t sure about.’”

Those could be species so rare or so infrequently seen in the herbarium that the model simply couldn’t match them, or a new species entirely.
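The abstain-if-uncertain routing the garden describes can be sketched as a simple classifier over reflectance spectra. Everything below is illustrative — a toy nearest-centroid model with invented species and a made-up confidence threshold, not the garden's actual pipeline:

```python
import math

def cosine(a, b):
    # Cosine similarity between two spectra given as lists of band values.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def classify_spectrum(spectrum, centroids, labels, threshold=0.95):
    # Compare a leaf's reflectance spectrum against each species'
    # mean training spectrum; abstain when no match is confident enough.
    sims = [cosine(spectrum, c) for c in centroids]
    best = max(range(len(sims)), key=sims.__getitem__)
    if sims[best] < threshold:
        return None  # low confidence: route this specimen to an expert taxonomist
    return labels[best]

# Toy "training" centroids for two species (three wavelength bands each).
CENTROIDS = [[1.0, 0.1, 0.0], [0.0, 0.1, 1.0]]
LABELS = ["Quercus alba", "Acer rubrum"]
```

A spectrum close to a centroid gets that species' label; an ambiguous one returns `None`, which is the "small box for the taxonomists" in the workflow above.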
And this kind of project is only possible, the garden staff said, because of advances in cheap computing power. The GPUs necessary to train A.I. models quickly are easier to get than before. And the garden has enough funding to process several hundred thousand specimens. Identified specimens can be used for a variety of research, including on biodiversity and climate change.

“We want these data to be compatible with other institutions, so we’re collaborating closely,” Mr. Teisher said. “We have the money to do a big chunk of our herbarium, but ultimately we want this to be a tool usable by everyone.”

3. Make everything look better

Dan Frazier

Designer and small business owner

Mr. Frazier designs and sells things like bumper stickers and magnetic signs. To help with the graphic design, he uses Adobe Photoshop’s Generative Fill, a two-year-old A.I. feature that adjusts images automatically.

“If I take a picture of a product, and don't like the glare or reflection I see on some shiny surface, I can use generative fill to ‘imagine’ that part of the photo, and usually one of the resulting images will be acceptable to me,” he said. “Or if I want to use a head shot of a politician on a bumper sticker, and I want to show a little more of the coat or shirt than appeared in the photo I am using, I can use generative fill to imagine that additional clothing.”

A problem that might have taken 20 minutes to address now takes 20 seconds, he estimated.
In one recent case, he wanted to post an image of a bike helmet he’d built himself, but he didn’t think he was the best model for it. So he used Photoshop’s A.I. to generate a woman’s face.

Image
Dan Frazier

But A.I. can only fill in the gaps for Mr. Frazier. “I have found generative fill to be less useful at creating images from scratch,” he said. “I once wanted to create an image of Joe Biden looking like one of the founding fathers, maybe like George Washington. But I was not happy with any of the results I was getting. I ended up melding a photo of Biden with a lifelike painting of Washington using traditional Photoshop techniques.”

4. Create lesson plans that meet educational standards

Manuel Soto

E.S.L. teacher

Mr. Soto, an E.S.L. teacher in Puerto Rico, said the administrative part of his job can be time-consuming: writing lesson plans, following the curriculum set out by the Puerto Rico Department of Education, making sure it all aligns with standards and expectations. Prompts like this to ChatGPT help cut his prep time in half:

Create a 5 day lesson plan based on unit 9.1 based off Puerto Rico Core standards. Include lesson objectives, standards and expectations for each day. I need an opening, development with differentiated instruction, closing and exit ticket.

After integrating the A.I. results, his detailed lesson plans for the week looked like this:

Image
English as a Second Language sample lesson plan. Manuel Soto

But he’s noticing more students using A.I. and not relying “on their inner voice.”
Instead of fighting it, he’s planning to incorporate A.I. into his curriculum next year. “So they realize it can be used practically with fundamental reading and writing skills they should possess,” he said.

5. Make a bibliography

Karen de Bruin

Professor of French and scholar of 18th-century French literature

Anyone who has ever assembled a “works cited” section knows the dizzying array of styles, formats and specific punctuation rules required for a bibliography. What’s the Chicago Style rule on how to cite books? Do you use quotation marks or underline? Or is it italic? What about in A.P.A. style? A.I. has freed Ms. de Bruin from the most annoying parts of the task. “No more consulting handbooks, guidebooks, cheat sheets, Purdue Owl, fretting about the right punctuation, whether guidelines have changed, and how to cite a three-volume work written in the 18th century, translated by God knows who, edited by Jesus only knows, and originally published where?”
All of this has been replaced, in her words, by “peace, serenity and Claude” (the large language model).
She uses prompts like:
Please cite in MLA format the book University Finances by Dean O. Smith.
or
Give the mla citation for this article: www.chronicle.com/article/higher-eds-fi ... er-coaster

Occasionally Claude cites “Doe, Jane,” and Ms. de Bruin challenges the answer.
“Then, only then, does it respond that it took its best guess at the author because the article was behind a paywall,” she said.

6. Write up therapy plans

Alissa Swank

Psychotherapist

Ms. Swank uses A.I. to take unstructured notes from a visit and turn them into SOAP notes, which is a structured documentation format for health care providers. (It stands for Subjective, Objective, Assessment and Plan — a way of summarizing the visit and the next steps.) It saves her a couple of hours each week, she estimates, “but more so it helps me complete the task that is so easy to put off.”
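The restructuring step can be sketched as a prompt template. The wording below is an illustrative guess at the kind of instruction such a tool would send to a model, not Ms. Swank's actual workflow:

```python
def soap_prompt(raw_notes: str) -> str:
    # Build an LLM prompt that restructures free-form session notes
    # into the four labeled SOAP sections. Wording is illustrative.
    return (
        "Rewrite the following unstructured therapy-session notes as a SOAP note "
        "with four labeled sections:\n"
        "Subjective: what the client reported in their own words.\n"
        "Objective: observable facts (affect, appearance, behavior).\n"
        "Assessment: clinical impression and progress toward goals.\n"
        "Plan: next steps, homework and follow-up schedule.\n"
        "Do not invent details that are not in the notes.\n\n"
        f"Notes:\n{raw_notes}"
    )
```

The "do not invent details" line matters: the model is reorganizing the clinician's own notes, not generating clinical content.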

7. As a ‘muse’

Marya Triandafellos

Visual artist

Ms. Triandafellos uses A.I. as inspiration for her art practice. She uploads dozens of images of her artwork to get the A.I. model to understand her style, then guides the model with prompts to generate new works based on her style. What she gets back are hundreds of abstract images in a grid:

Image
Marya Triandafellos

She studies them the way a psychiatric patient interprets an inkblot test.
“I looked at each image and wondered what it reminded me of, reaching my subconscious,” she said.
From there, she sorts them into themes and uses them as a base for a more fully finished work. She also asks the model to be her critic:
Please act as an art critic and evaluate this piece based on its subject, themes, how it makes you feel, and historic connections. Consider how it may be connected to science or math. Then, provide me with an appropriate title.
“It may not be as nuanced as a human art critic,” she said, “but it does decipher key aspects of the work which I refine further.”
She doesn’t use A.I. to create final pieces, though: “I tried — and was bored and frustrated.”

8. Detect leaks in a water system

Tim J. Sutherns

Company president

When a water system springs a leak, you might not notice until it becomes a big problem. Mr. Sutherns’s company, Digital Water Solutions, is trying to catch leaks early by placing small sensors inside fire hydrants that record the noise water makes as it flows through the pipes. That data is fed to a machine learning model that looks for certain patterns suggesting a leak.
It’s a relatively simple concept, but it’s hard to reproduce quickly, said Mr. Sutherns, in large part because every system is different: different pipe material, sizes and pressures.
“If we had to build individual machine learning models for every one of these unique systems, it would take us months, a whole bunch of data scientists,” he said.
Instead, the team uses “autonomous machine learning.” A.I. figures out, on the fly, what the parameters of the model should be for a specific system, meaning the company doesn’t need to know anything about the system ahead of time — it just has to start collecting data. Within a couple of weeks, typically, the models can provide some information on possible existing leaks.

Image
Digital Water Solutions

Mr. Sutherns started the company in 2018, but recent advancements in machine learning, cheaper computing power and data storage have made the business far more feasible.
Small water systems, serving fewer than 10,000 people, make up the vast majority of water systems in the U.S., and have small budgets. Offering the technology to those systems at a reasonable price? That wouldn’t have been possible a few years ago, he said.

9. Just write code

Chris O’Sullivan

Chief technology officer and company co-founder

It’s one of A.I.’s simplest and most common use cases — one that even the A.I. engineers are leaning on: writing code. Mr. O’Sullivan is one of them: As the C.T.O. of DraftPilot, a legal A.I. company that helps lawyers with contract review, he frequently uses Anthropic’s Claude Code.
“I can give it tasks and just walk away,” he said. “It writes the code itself.”

Image
Chris O’Sullivan

10. Type up medical notes

Matteo Valenti

Primary care physician

At Dr. Valenti’s hospital, an A.I. tool, Abridge, is built into the electronic medical record system to take notes when he meets with patients. The tool listens to his conversation with the patient, then creates an organized record of the visit — the kind he would otherwise have to produce manually.

Image
Abridge

It saves him about an hour each day, he estimates, “but the real benefit is that it captures details I would have otherwise forgotten.” If a patient comes in for diabetes, but briefly mentions back pain, that aside makes it into the record whether or not he remembers it. And he’s able to focus on having a real conversation with patients, without transcribing every word.
He worries that the tool may replace human scribes. But for providers on tight budgets, it makes a difference. “For those of us in primary care who are drowning in paperwork,” he said, “this will be a plus.”

11. Run experiments to figure out how the brain encodes language

Adam Morgan

Postdoctoral fellow

For his research in cognitive neuroscience, Mr. Morgan works with neurosurgery patients. While their brains are exposed, he runs experiments that attempt to examine how the brain encodes things like language and meaning — often by asking them questions while directly measuring their neural activity.
Because there’s usually limited time and subjects on whom he can run experiments, he has to prioritize research topics. That’s where A.I. comes in.
Like a human brain, artificial neural networks take some kind of input (words, say) and produce outputs (other words). For the human brain, what happens in the middle is something of a black box, but we know that words we hear are translated into neural activity that represents meaning, then decoded into other words. Mr. Morgan says artificial neural networks do something similar, only using numbers.
“There’s good, and growing, evidence that L.L.M.s encode syntax and words in a similar way as the brain,” Mr. Morgan said.
But unlike with a brain, you can directly examine these encoding processes in a large language model just by looking at the code. So the A.I. can act as a pseudo brain to test hypotheses about language that are hard to test in real brains.
“In my work, I figure that if I find that the middle layers of a computer model are sensitive to a particular property that I'm interested in in the brain, it's a decent indication that the brain might care about that,” he said.

12. Help get pets adopted

Kristen Hassen

C.E.O.

Ms. Hassen’s company, Outcomes for Pets Consulting, works with large animal shelters to decrease euthanasia rates and shorten animal stays. She uses A.I. to come up with ideas:
Give me 50 ideas for adoption promotions focused on senior pets who have lost their homes
One of them was:
Lifetime of Love: Side-by-side then and now photos of pets who have lost their longtime families and a call to give them love again.
“We’re definitely going to do that one,” she said.

13. Check legal documents in a D.A.’s office

Chris Handley

Director of operations and chief of innovation

Mr. Handley works in the Harris County District Attorney’s office in Houston, the third-largest jurisdiction in the country. He recently built a custom large language model that helps prosecutors and the police avoid errors when filing arrest paperwork.
After booking someone, the police type up their account of events, and that report goes to the D.A.’s office. It then goes straight into Mr. Handley’s L.L.M., which does a series of checks, looking for issues a judge might later catch: a typo, a missing piece of information about the arrest, a slightly incorrect charge, a sexual assault victim’s full name rather than initials, all of which could and do slow the process.
“When people think of A.I., they think of chatbots, or they think of Skynet, facial recognition,” Mr. Handley said. “We're not doing any of that. For us, there's so much low-hanging fruit. Just making sure our paperwork doesn't have mistakes on it.”
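The "low-hanging fruit" here is largely mechanical validation. A minimal sketch of the kind of pre-filing checks the article describes might look like the following; the field names, rules and regex are invented for illustration and are not the Harris County system:

```python
import re

# Illustrative required fields for an arrest report.
REQUIRED_FIELDS = ["defendant", "charge", "arrest_date", "officer"]

def check_report(report: dict) -> list:
    # Flag missing fields and, in sensitive cases, a victim identified
    # by full name where initials (e.g. "J.D.") are required.
    issues = []
    for field in REQUIRED_FIELDS:
        if not report.get(field):
            issues.append(f"missing field: {field}")
    victim = report.get("victim_name", "")
    if report.get("sensitive") and victim and not re.fullmatch(r"([A-Z]\.)+", victim):
        issues.append("victim identified by full name; use initials")
    return issues
```

A report that passes returns an empty list; anything else goes back to the filer before a judge ever sees it.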
They’ve been testing the program and working on a larger rollout. A colleague tried it and said it reduced her work time by 50 percent. Mr. Handley now wants to pilot a model that could work with police officers while they’re first filing charges from the scene.
But the models are not useful for everything yet. He trained one model on case law and asked it about one of his cases.
“It very confidently went on and on about these made-up facts that had nothing to do with my case,” he said. He deleted the model.

14. Get the busywork done

Sara Greenleaf

Project coordinator

Ms. Greenleaf works for a health insurance consultant, and many of her duties are administrative: drafting contract documents, scheduling meetings, editing PowerPoint slides, signing people up for conferences, and so on.
She turns to ChatGPT to get all those tasks checked off. It helps her summarize “action items” from a long chain of emails; proofread her emails; create contract templates; search through long documents like benefit summaries; and compare documents when she suspects there might be small differences.
But it wouldn’t help her with her first career: pianist.
“If I hadn’t had this experience of working in an office, I think I’d be mostly horrified by A.I.,” she said. “I never use it in my creative life, and am very worried about its implications for the arts.”
And it hallucinates sometimes, she added, so she checks and cross-references her results carefully. “A.I. is not doing my work for me,” she said. “Most of the time it’s just getting me started with a task or prompting me to think of something in a different way.”

15. Review medical literature
Michael Boss

Medical imaging scientist

Mr. Boss oversees the use of M.R.I., CT and other scans in clinical trials, ensuring that imaging is done to protocol and working on standardization efforts. He’s reading medical literature nearly every day — and he uses ChatGPT, Perplexity, Undermind and more tools for that.
That means he can say something like:
Identify relevant imaging biomarkers and their reproducibility as evidenced by ICC, CCC, or wCV in primary prostate cancer as used in interventional studies.
And get back a result like:

Image

He doesn’t rely much on A.I. summaries; instead, the chatbot’s response gives him a sense of what scientific literature might be relevant to his question and worth reading in full.
“Using A.I. has profoundly sped up the process,” he said.
He’s learned to be very careful about chatbot summaries in particular. Recently he asked ChatGPT a question about M.R.I. diffusion, an area where he’s made some contributions. The response misattributed his work to a person who appeared not to exist — frustrating for a scientist whose reputation is built on credit, and alarming for a chatbot user.
“I find that ChatGPT's current approach is very much a groupthink summary, if you take it at face value,” he said. “That is potentially dangerous. However, taking its results with skepticism, you can use the results to seed additional searches, or additional prompting to get to the right answer.”

16. Pick a needle and thread

Nicole Goldman

Fiber artist

For Ms. Goldman’s work as a fiber artist, she often needs to know the best stabilizer to use, or the best glue, for a particular project.
“I've used Claude to resource materials, to help me decide what size needle and thread I should be using for a particular project, to give me technical information,” she said. “Where I might have ‘Googled’ before and had to sort out a huge variety of information and sources, this definitely cuts right to the chase and organizes the information so much more quickly and succinctly.”
Recently she asked Claude for a didgeridoo pattern. The final product ended up more like a bird, she said, but she didn’t mind — she considered it a collaboration with the A.I.

Image
Nicole Goldman

17. (More politely) let band students know they didn’t make the cut

Deb Schaaf

Music teacher and jazz director

Ms. Schaaf is a music teacher in a competitive high school jazz program. Not everyone can make the cut, and she has to deliver the news. She uses A.I. to help let down her students firmly but gently.
“I discovered my favorite prompt after asking the A.I. for more diplomatic language in a message about the need to fire a drummer,” she said.
Her initial attempts were “so padded with feel-good fluff that it became nearly twice as long and obscured most of the issues.”
After some back and forth, she finally landed on a prompt that worked:
Make it more Gen X
The results were what she was hoping for, “a much more direct message that was thoughtful, but didn’t sound like Mr. Rogers on molly.”

18. Help humans answer more calls at a call center

Thor Dunn

Chief, Customer Service Center

California’s Department of Tax and Fee Administration is responsible for tens of billions of dollars in state revenue each year. And because taxes are complicated, its main call center gets hundreds of thousands of calls a year. That’s where the department thinks A.I. can help. It’s testing a system using a version of Claude trained on state data.
During a customer service call, the A.I. reads a live transcript and suggests an answer. The human agent on the call can then click through to the reference material linked in the A.I.’s answer, and decide whether it’s right. The goal is to help the real people answering calls sift through more than 16,000 pages of reference material on taxes and fees.
Early tests showed a 1.5 percent improvement in the time it took to process calls, and Mr. Dunn thinks that could rise as call center agents become more familiar with the system. The model is working better now than it was even earlier this year, thanks to improvements in Claude.

19. Help translate lyrics from the 17th and 18th centuries

Richard Stone

Orchestra co-director

Mr. Stone co-directs the Philadelphia Baroque Orchestra, and as part of that job translates lyrics for Renaissance and Baroque vocal works. He knows the main singing languages (Italian, French, German and Latin), but only as they are spoken and written today. Versions from hundreds of years ago were different, and less standardized.
“The A.I. helps me to gain the experience that my conservatory training didn't include,” he said. He does all of the initial translation on his own and uses A.I. more as a “consultant or a tutor” to check his work.
When there’s a passage he’s unsure of, he’ll show both the original and his translation to the A.I., going back and forth to come up with something he feels more confident about.
“The important thing is to maintain a reserve of skepticism,” he said. “It will make things up, so when I get suspicious I will quiz it.”
Mr. Stone was recently trying to crack this phrase in Italian:

Image
Richard Stone via Stift Heiligenkreuz Musikarchiv

The first word gave him trouble.
“I transcribed the Italian word ‘pramo,’” he said. “I invested so much energy on my own and working with the A.I. on figuring out what ‘pramo’ could possibly mean. I eventually recognized the word as ‘bramo’ (I desire/wish). It could have been an unattested form of the word or an outright scribal error. That sort of intuitive leap is not something the platform I use is remotely good at.”
And the final translation?
Bramo che sia così per tuo contento.
I wish it to be so for your happiness.

20. Explain my ‘legalese’ back to me

Deyana Alaguli

Lawyer

Ms. Alaguli uses this prompt with Google Gemini to help see if her legal writing is confusing:
I understand you're not a lawyer, tell me what a layman might understand from this paragraph
You can’t count on A.I. to accurately interpret legal or technical jargon, she said, but it can be great for helping build your case. She also uses it to prepare for hearings and to help practice closing arguments.
“It can understand your arguments, or help you anticipate holes in your case, better than a colleague can,” she said. “It's faster, unbiased, not worried about hurting your feelings.”

21. Detect if students are using A.I.

Matthew Moore

High school English teacher

Mr. Moore uses Magic School A.I. and ChatGPT to generate worksheets, rubrics, images and educational games for his various English classes. And his students are using it, too.
“It does feel hypocritical to tell them not to use it when I am using it,” he said. But he turns to A.I. to make sure they are using it in permitted ways.
He remembers a ninth-grade student who turned in “a grammatically flawless essay, more than twice as long as I assigned.”
“I was shocked,” he said. “And more shocked when I realized that his whole essay was essentially a compare and contrast between O.J. Simpson and Nicole Brown Simpson.”
That was not the assignment.
“The A.I. detection software at the time told me it was A.I.-generated,” he said. “My brain told me it was. It was an easy call.”

Image
Matthew Moore

So Mr. Moore had the student redo the assignment … by hand.
But, he said, the A.I. detectors are having a harder time detecting what is written by A.I. He occasionally uploads suspicious papers to different detectors (like GPTZero and QuillBot). The tools return a percent chance that the item in question has been written by A.I., and he uses those percentages to make a more informed guess.
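The combining step Mr. Moore describes, turning percent scores from several detectors into one informed guess, can be sketched roughly as below. The detector names and scores are hypothetical examples, not real API output; the article does not say how he weighs the numbers, so this sketch simply averages them:

```python
def combined_ai_likelihood(scores):
    """Average percent-chance scores (0-100) from several A.I. detectors.

    `scores` maps detector name -> percent chance the text is A.I.-written,
    or None if that detector gave no result. Returns a single 0-100
    estimate, or None if no detector returned a score.
    """
    valid = [s for s in scores.values() if s is not None]
    if not valid:
        return None
    return sum(valid) / len(valid)

# Hypothetical scores from two detectors mentioned in the article
scores = {"GPTZero": 88.0, "QuillBot": 72.0}
print(combined_ai_likelihood(scores))  # → 80.0
```

A simple average is only one possible choice; a teacher might instead trust one detector more than another, or treat any single high score as grounds for a closer look.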
“We are, likely, less than a year away from when teachers cannot reasonably discern between A.I. writing and student writing,” he said. The more sophisticated A.I. papers can imitate the writing level of a high school student. (Some students even feed their A.I. papers into another website like Humanize A.I. to try to make the writing feel more natural.) “Once we pass that threshold, we will no longer be able to accept any typed essays or writing assignments from students. It will all have to be under testing conditions, or they will have to write it all by hand.”

https://www.nytimes.com/interactive/202 ... -jobs.html