Highlights
id983566045
half a billion people around the globe are using ChatGPT on a weekly basis. It’s the fastest adoption of a new technology in all of history. Yet even as technologists and pundits continue to breathlessly extol the transformative power of this technology, millions of us still don’t understand what modern AI really is, how it works, or how to get the most out of it.
id983566046
Given the way that society always responds to any new labor-saving and productivity-enhancing innovation by demanding ever more of us, we’re not far from the day when working without the aid of AI will be like insisting on using a horse and buggy for our morning commute.
id983566047
Adopting a set of straightforward mental models of what AI is and how it works will allow you not only to navigate but perhaps also to flourish in the next few years or even decades.
id983566048
Teaching the principles of AI is necessarily a bit more complex and benefits from the high-level perspective I’ve been privileged to gain from peering into the lives and work of so many. This is because modern AI is so broad in its abilities and is used in ways that are so peculiar to such a wide variety of tasks. It’s what historians call a “general-purpose technology,” one that impacts countless other activities and industries, such as the automobile or the internet before it.
id983566049
Despite what you might have heard, AI isn’t here to squash creativity or usurp the tasks we enjoy doing most; it’s here to take over tasks defined as “toil”: the repetitive, tedious stuff no one enjoys doing.
id983566050
The human brain loves narrative—we truly are built for it—and it’s the way we structure our memories. Stories are the tools we use to compress our messy, piecemeal understanding of how the world works into a form we can easily recall. If the memory consolidation that happens when you sleep is your brain’s compression algorithm, stories are its main product.
id983566051
Remember my beef with the term AI? In the pages that follow, I will succumb to convention and call AI by its generally accepted name. But in my own mind, I think of it as “simulated intelligence,” and perhaps you should, too. Simulated intelligence reminds us that while AI can sometimes fool us into thinking it’s intelligent in the same ways that we are, at a fundamental level we’re dealing with something very different from us. It emphasizes just how alien it is, compared to ourselves, our pets, even the things that creep and crawl on this earth and that I spent the early part of my twenties investigating during my brief stint as an invertebrate neuroscientist in the lab of an MIT-trained electrical engineer.
id983566052
There will be holdouts, people who remain convinced that the usurpation of their cognitive labors by AI will compromise the quality of their work or could lead to their own irrelevance. But for most of us, delegating to AI the parts of our jobs that are tedious and time-consuming will be liberating.
But where is the balance? What line separates what is sensible to delegate from what would be harmful to delegate? Consider the analogy of exercise as a substitute for the positive externalities of manual labor, which nowadays we must make up for ourselves.
id983566053
People who use AI well know you can’t simply ask AI to do your job for you, whether in law or any other field. Yet there are many, many tasks in which AI can help lawyers—as long as they use it appropriately.
This involves treating it as an assistant: it can help you do your work, but you are the one who must take responsibility for it.
id983566054
“I feel like a lot of lawyers are looking at AI as ‘How can it do my job for me?’ and not ‘How can I enhance my job performance with it?’ ” she said. “At the end of the day, it’s a tool. As exciting as AI is, using it to enhance what you are already doing is key for me.”
It doesn't do your job for you, but it augments your capabilities.
id983566055
the human jobs that AI might take over completely are for the foreseeable future the kinds of tedious and borderline mindless jobs that companies outsourced long ago, and the potential further impact of AI on those jobs in advanced economies such as the United States is low.
id983566056
The insurmountable limitations of today’s AI systems mean that they almost always require, at minimum, human supervision. And in the overwhelming majority of cases, AIs are merely tools requiring that we actively wield them. Power tools may have been one of the defining innovations of the Industrial Revolution, but no one would call the steam drill a robot, much less an autonomous one.
The key to this paragraph lies in its emphasis on the agency the subject exercises in relation to the model.
id983566057
Ethan Mollick, a professor at the Wharton School who is an obsessive documenter of the ever-evolving abilities and limitations of AI, has pointed out in his writing and our conversations that the more you know about a field, the more you can get out of today’s cutting-edge AI. That’s because experts, whether they’re coders or MDs, are able to ask AIs better questions and then continue following up with more prompts, pushing the AI ever deeper into their databases of knowledge. They’re also—and this is critical—better able to evaluate an AI’s responses and recognize when the AI gets something wrong.
Second law: AI helps the expert more, because the expert knows how to ask better questions and is better at evaluating the quality of the output.
id983566058
This is the secret of the overwhelming majority of today’s AI services: They’re all using the same handful of AI “minds.” Whether you’re talking to an AI when ordering at the drive-through of a fast-food restaurant, your therapist is using AI to summarize their notes from your latest session, or your lawyer is using Filevine’s deposition copilot, they’re all running on OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, xAI’s Grok, Amazon’s Nova, or an open-source model from Meta, Mistral, or DeepSeek. Part of the reason they’re using these “frontier” models, aside from the fact that they are the best, is that they’re also increasingly interchangeable. There’s even evidence that as models become more capable, their outputs converge on the same responses.
An important fact that people are not very familiar with.
id983566059
The continually falling price of access to today’s AIs and the way the companies building them keep pace with one another in terms of features and abilities mean that even the most sophisticated AI models are becoming commodities. The definition of a commodity is, after all, that it’s interchangeable, the way one barrel of oil or bushel of wheat is as good as any other.
The commoditization of AI is a direct consequence of competition among the labs.
id983566060
in the future, AI will be a feature and not a product. For many of us, it will be invisible, wrapped up deep in the code of services and applications that we already find familiar.
I fully agree.
id983566061
The definition of technology, I have learned from many years of covering it, is whatever we are still coming to grips with. Once its evolution has slowed enough for us to take it for granted, it’s merely infrastructure. To get the most out of AI is, more than anything, a matter of finding the software tools that will integrate it most seamlessly and leverage it to accomplish things that no other app or service can.
The value lies in customization and in our ability to deploy it.
id983616614
Ever since the debut of ChatGPT in November 2022, a steady flow of research has backed up the idea that, given the choice, companies will use AI to perform low-level tasks they used to farm out to freelancers and junior employees.
id983630917
Then there are the endless apps and services that are basically just app- or web-based interfaces that provide access to these frontier models and, by providing additional material for the AI in a consistent but largely invisible way, may modify their behavior in meaningful ways. These are built by startups with far fewer resources than the big AI labs that produce foundation models, but their creators have some idea about a service they think they can build with them. (As I outlined in the previous chapter, these services are the way most people will use and encounter AI in the future.)
This is the kind of thing that Grupo Educativo could build.
id983630918
there are limits to how well the memory of today’s AI chatbots works, and understanding the nature of those limitations is key to getting the most out of them. This process of tuning an AI’s outputs is how we make them our own and is essential to getting these systems to do useful work.
id983630919
What I learned from the tender, formative years I spent learning about and doing research on nervous systems at various levels of abstraction, from individual molecules all the way up to whole behaving animals, can be summed up in a single sentence: Neuroscience is a pretty shit way to try to understand the human mind. The problem isn’t the science, which is excellent; it’s the complexity. If you want to know why an animal or person did a thing, you are never in a million years going to be able to reconstruct every impulse of every nerve that inspired that action, thought, or utterance. Even if you could capture a full picture of all of the neural processing that led to a behavior, our human minds simply don’t have the capacity to grasp it. Fortunately, there’s an alternate way to understand why people do the things they do, and it works equally well for understanding why modern AIs do the things they do: cognitive psychology. In humans, cognitive psychology is the study of how our minds work, at a level of abstraction that’s just below conventional psychology. How do memory and attention work, and what is their interplay? How does language inform our view of the world? What are the nearly two hundred biases that distort and inform our decision making? These are all questions asked by cognitive psychologists. The nascent field of cognitive psychology, but applied to AI, is known as “machine psychology.”
An experience-based version of the saying: if our brain were simple enough for us to understand it, we would be too simple to manage it.
id983630920
First, there’s what today’s generative AIs actually are. They aren’t—and I cannot stress this enough—intelligent in any meaningful sense of the word. The most simplistic definition of how they work is that, given a series of words, they predict the next most likely word. The chat interface of an AI chatbot is really just a clever design choice laid atop this basic fact. When you interact with the large language models that underpin today’s AI chatbots—and countless other AI tools—what you’re really doing is, in essence, writing a collaborative story with them. Given such a simple mechanism, how on earth can today’s large language models predict the next word in a sequence in a way that allows them to interact with us in a way that appears intelligent? The facile answer favored by people whose billion-dollar fortunes depend on this being true is that in some sense these models are intelligent. This is very convenient for startup founders seeking unheard-of amounts of investment to build AI supercomputers, as it implies that if investors just give them enough billions, their silicon pets will soon demonstrate that they are in fact as intelligent as we are in ways that are irrefutable.
Stochastic parrot
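The claim that a model, "given a series of words, predicts the next most likely word" can be illustrated with a deliberately crude sketch: a bigram counter that always proposes the word it has most often seen follow the current one. This is not how a transformer works internally, and the corpus here is a toy, but the prediction objective is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model -- a crude
# stand-in for the "predict the next word" objective).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen following `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often
```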
id983704949
Today’s AIs simulate intelligence by learning impossibly long lists of rules of thumb, which they then call upon to generate new content.
This is Searle's position. It sounds reasonable, but to draw this distinction knowledgeably I would need to be able to explain what characterizes human reasoning, in contrast to the model's statistical prediction.
id983704950
The “bag of heuristics” way of working for today’s AIs applies to not just large language models but all of the kinds of AIs that are based on the same underlying collection of algorithms. This underlying model is called a transformer. Transformers can be applied to a dizzying array of fields and types of data, from mathematics and genetics to the movements of robots and autonomous vehicles. While transformer models can learn from many different kinds of data—and with minimal human intervention—it’s easy to see why their results are often described as brittle. Many such models exhibit unpredictable behaviors and, worse yet, are unpredictably unpredictable. That is, you rarely know in what circumstances they will break and generate a nonsense response until they do. This is annoying in a chatbot and disastrous in a self-driving car. No matter how large the bag of heuristics may be, it’s apparent that today’s AI models have trouble coping with situations that fall outside the data they’ve used to come up with those rules of thumb in the first place.
What is described here at a headline level, I need to be able to understand in sufficient depth.
id983704951
Unlike a human or an animal, even today’s most cutting-edge AIs are fixed things. They are trained on vast quantities of information in giant AI-making factories called data centers.
And, if I understand correctly, once the model is trained it is not substantively modified, beyond the context and instructions we give it in our interactions.
id983704952
Once that training is done, a model is fully baked. Its tendencies can be tuned in various ways, but again, that all happens at the AI factory. Once the average person is interacting with one of these systems, all of the endless connections among its various digital neurons, called weights, are set in stone. I have often found that the adage “You can’t teach an old dog new tricks” isn’t true—but it is absolutely true of AIs. The only way we, the typical end users, can modify the behavior of an AI is by feeding it information for it to “think about” while it’s running for us. When an AI model is running for us, it’s called inference, and it’s distinct from the formative training it underwent at the data center. All language models have what’s known as a “context window” into which is crammed the entirety of the current conversation we’re having with that AI, plus the contents of any documents or web pages we’ve handed to it, plus whatever is in the stable but modifiable “memory” of that AI chatbot. If you think of an AI’s context window as its short-term memory, holding on to all of that at once is quite a feat.
The extended version of what I noted in my last comment.
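The "cramming" of the context window described above can be sketched as a function that assembles instructions, memory, documents, and conversation into one block of text, dropping the oldest turns when a budget is exceeded. Everything here is an assumption for illustration: real systems count tokens with a proper tokenizer, not one word per token.

```python
def build_context(system, memory, documents, conversation, budget=8000):
    """Assemble everything the model 'sees' at inference time,
    dropping the oldest conversation turns if the budget is exceeded.
    Token counting is crudely approximated as one word per token."""
    def tokens(text):
        return len(text.split())

    fixed = [system, memory] + documents
    used = sum(tokens(t) for t in fixed)

    kept = []
    # Walk the conversation newest-first, keeping turns that still fit.
    for turn in reversed(conversation):
        if used + tokens(turn) > budget:
            break
        kept.insert(0, turn)
        used += tokens(turn)
    return "\n".join(fixed + kept)
```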
id983704953
for every tenfold increase in the number of words you give an AI chatbot, the number of mathematical operations it must perform to “think” about those words increases by a factor of about 100.
Intuitively, this is because training must evaluate the reciprocal relationship between every pair of available tokens.
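The tenfold-to-hundredfold relationship follows from the quadratic cost of self-attention, in which every token is compared with every other token. A one-line sketch makes the arithmetic concrete:

```python
def attention_comparisons(n_tokens):
    # Self-attention compares every token with every other token,
    # so the work grows with the square of the input length.
    return n_tokens * n_tokens

print(attention_comparisons(1_000))   # 1,000,000
print(attention_comparisons(10_000))  # 100,000,000 -- 100x more for 10x the words
```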
id983704954
The makers of modern AIs solve the problem of their runaway demand for computing by limiting how much they have to remember at any one time. If you’re a developer and willing to pay for more computing power, you can get some really gigantic context windows; Google’s Gemini, for example, has an enormous context window of up to 1 million “tokens,” where each token corresponds to roughly one word.
Apparently, what I noted on the previous passage applies not only to the training phase but to use as well.
id983704955
AIs’ forgetfulness is just one of the reasons that we can’t yet rely on them to do our jobs for us and must review everything they produce. The other big issue is that AIs lie constantly. This phenomenon, which has gotten a lot of attention in the media, is called “hallucination,” though that’s a bit of a misnomer. A hallucination is what happens when we perceive something that isn’t there; it’s our imagination, freed from the bounds of our senses. The word implies an altered and pathological state of mind. But for large language models, hallucinating is simply how they operate. In order to generate any response at all, their hundreds or thousands of layers of artificial neurons have to come to a consensus on what to produce, and the process by which they do that means that the system has no ability to judge the truth or falsity of what it’s spewing.
Perhaps the same logic as predictive coding applies here, on which view conscious experience is a controlled hallucination.
id983704956
One 2025 study of popular large language models found that even when AIs were asked simple, easily verifiable questions based on news articles supplied to the model, their answers included made-up information between 15 and 39 percent of the time.
The hallucination rate remains high.
id983704957
Large language models fib so often and so confidently that I’ve learned not to rely on them as a source of any kind of information.
id983704958
It’s entirely possible that today’s AIs are, in essence, essentially gigantic, high-speed swarm intelligences. It’s as if they are enormous ant colonies tasked with learning from and responding to stimuli such as human language. As Yann LeCun, the AI pioneer who is now the chief AI scientist at Meta, once told me, “We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true. You can manipulate language and not be smart, and that’s basically what LLMs are demonstrating.”
id983704959
Another way to leverage AI is to ask it to help you organize your thoughts. As a writer whose continued employment depends on being better at my craft than the majority of other writers, I can’t imagine using AI to write for me. But as a journalist and writer of nonfiction, the idea that an AI can act as a sort of research assistant and librarian is enormously appealing.
Again, the difference between passive and intentional use of LLMs.
id983704960
While it definitely helps to communicate with them as clearly and precisely as possible, if you have reasonable communication skills, just typing at them as though they’re smart human assistants is all that’s required.
id983704961
This is one reason why Leanne describes herself as an AI “coach” rather than consultant. The overarching lesson she tries to convey, based on her own experience, is that getting over our reluctance to experiment with new AI-based tools is step one—and also steps two, three, four, and so on, as working with AI becomes a regular practice rather than a one-and-done new skill we acquire.
Using AI is a skill that is cultivated enactively.
id983704962
Bag of heuristics: A hypothesis about the way modern AIs function, which remains controversial among some experts but is backed up by a growing body of research. The idea is that rather than creating an abstract, relatively compact model of the world, the way humans and other animals do, today’s transformer-based AI systems learn unfathomably long lists of rules of thumb that they run all inputs through before generating output.
id983704963
AI systems, such as the large language models that power ChatGPT and its competitors, start out as gigantic masses of artificial neurons in what is usually a completely untrained, uninformed, tabula rasa state. During training, a complicated process that was first fully described in a paper by Google researchers in 2017, these transformer models “learn” the underlying structure of the data they’re exposed to. (In the case of today’s large language models, this is based on enormous amounts of text scraped from the internet.) Once trained, transformer AI models can be tuned, a process that adjusts the values of the variables they contain, but this tuning process is not available to end users, who must content themselves with the state of a model as it’s made available to them.
As end users, we only interact with the cake once it has come out of the oven.
id983704964
When we use a transformer-based AI such as ChatGPT, our inputs are run through the tens of billions of variables inside the AI, which then spits out what it thinks is the most appropriate response. This is called inference. This is a statistical process—a bunch of arithmetic running on microchips, like everything else we do with computers—and should be regarded as mundane rather than mysterious.
Inference is the model's calculation of the words most likely to continue the prompt, based on the weights of its neurons, which were calibrated during the training phase.
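The "bunch of arithmetic" can be made concrete with the final step of inference: the network's raw scores (logits) for candidate next words are pushed through a softmax to become probabilities, and the most probable word wins. The words and scores below are made up for illustration:

```python
import math

# Hypothetical scores the network assigns to candidate next words.
logits = {"mat": 2.1, "dog": 0.3, "moon": -1.2}

# Softmax: plain arithmetic turning scores into probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

best = max(probs, key=probs.get)
print(best)  # "mat" -- the highest-scoring candidate
```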
id983704965
The AI you’re most likely to hear about these days is but a single small part of the giant tree of techniques and algorithms that make up modern artificial intelligence. If deep learning is a big bough jutting out from the trunk of modern AI, transformers are limbs sprouting from that bough, large language models are branches along the length of those limbs,
id983704966
As I outlined in the previous chapter, at the heart of nearly all of the really powerful AIs that regularly make headlines is a collection of techniques and algorithms known as a transformer. It’s hard to overstate the importance of the transformer in the history of AI. It’s definitely a top-five all-time breakthrough. Some would rate it as the most important breakthrough in the history of the field. And it was invented at Google and revealed in a 2017 article entitled “Attention Is All You Need.” The title is a reference to one of the key breakthroughs in transformers, an “attention” mechanism that allows the artificial neural network at its core to come up with those endless rules of thumb that underlie modern generative AIs.
Tengo que lograr entender este mecanismo.
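A minimal, pure-Python sketch of the scaled dot-product attention described in "Attention Is All You Need": each query is scored against every key, the scores are softmaxed into weights, and the output is the correspondingly weighted mix of values. The 2-d vectors are toys; real models use thousands of dimensions and learned projections.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention over toy vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # how much this query attends to each token
        mixed = [sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(len(values[0]))]
        out.append(mixed)
    return out

# One query attending over two key/value pairs; it aligns with the
# first key, so the first value dominates the mix.
result = attention([[1.0, 0.0]],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[10.0, 0.0], [0.0, 10.0]])
```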
id983704967
In 2018, the company came up with the world’s first transformer-based large language model, called Bidirectional Encoder Representations from Transformers, or BERT.
id983704968
In 2021, one of the authors of the transformer paper—the researcher who was enticed to return in 2024 for $2.7 billion—left the company because Google refused to release the chatbot he’d been working on. A year later, OpenAI released ChatGPT, and so the moment when transformers burst onto our attention will forever be attributed to OpenAI and its “ChatGPT moment” and not to the company where the technology was invented.
id983704969
Imagine yourself setting out to perform a particular task with an AI chatbot, the way Leanne, in the previous chapter, often does—say, brainstorming a topic for an article or social media post. “First you use AI to come up with an idea, help you do research, and make an outline,” said Moskovitz. “You might come up with a systematic process for doing that.” So far, so good; this process is known as “prompt engineering,” which is a term consultants and tech influencers use to obscure the straightforward process of “giving a chatbot instructions as you would a talented but overly literal intern.”
An important clarification: "prompt engineering" talk is mostly hype-peddling.
id983704970
The unpredictability of generative AI—which, like hallucination, is inherent in how these systems operate—is yet another reason that today’s AIs need scaffolding. That scaffolding is built out of conventional, rules-based systems.
The LLM is wet cement: it has great potential, but it requires external structure to deliver value consistently.
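One common shape for that rules-based scaffolding: wrap the model call in conventional validation and retry logic, so the system fails predictably even when the generator does not. `call_model` below is a hypothetical stand-in for a real API call:

```python
import json

def call_model(prompt):
    """Hypothetical stand-in for a real LLM API call."""
    return '{"invoice_id": "A-123", "amount": 250.0}'

def extract_invoice(prompt, retries=2):
    """Rules-based scaffolding around an unpredictable generator:
    validate the output, retry on failure, fail loudly otherwise."""
    for _ in range(retries + 1):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: ask again
        if isinstance(data.get("amount"), (int, float)) and data["amount"] > 0:
            return data  # passed the conventional checks
    raise ValueError("model never produced a valid invoice")
```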
id983704971
“Dealing with freelancers all over the world, we found that unless you really understand things and can ask the right questions of an AI, the output is garbage,” Sean Jackson, the founder of PouncerAI, told me. “Writers have frameworks; they know about things like a lead paragraph. But if I’m a programmer, I don’t know what a framework is—so how will I know to ask the AI the right questions?” Freelancers, he added, “just want the answer—but with AI, the hard part is figuring out what the question is going to be.”
What kind of scaffolding is needed to deliver value in the world of education?
id983704972
For John, PouncerAI has been a game changer. Today, he is able to apply to between 7 and 10 search marketing jobs an hour, for a total of up to 160 a month. Without AI, his response rate would be a fraction of that, and even with the help of ChatGPT, all the toggling among tabs to gather all of his information into a suitable prompt would take significantly longer and wouldn’t yield the same results.
We could do this to bid for tenders.
id983704973
Even the most die-hard devotees of using AI to write software, using the most powerful models available, have found that using AI to code requires constant supervision and a deep knowledge of programming in order to keep it from going down inappropriate rabbit holes or making mistakes that could cost the companies relying on this code dearly, in the form of inefficient and vulnerable systems. Thus, it seems that even in the area where AIs have found the greatest utility and are undergoing the most rapid evolution, we are nowhere near the point at which building software with AI is "Set it and forget it." Notably, as with copywriting and concept art, these systems do allow a new kind of prototyping. Even nontechnical people can now use tools such as Bolt, Lovable, and Replit to use plain-English prompts to knock out working prototypes of new apps and online services. These tools don't yield systems that are robust enough to be shipped to end customers, which shows the limitations of the vogue for "vibe coding." The simultaneous rise of code autocomplete tools that make experienced coders more productive and vibe-coding tools that can enable everyday folks to create working prototypes of software does appear to be having an impact on the demand for junior and entry-level coders. This seems less like a shift of the industry away from needing people to create and maintain products and services and more like a shift in what skills the tech industry needs those people to have.
Programmers are still needed; the point is that fewer assistants (entry-level workers) are needed.
id983704974
The fastest and most reliable way to build novel systems that leverage AI is to use AI only where appropriate, within existing systems. Often, that means using as little AI as possible.
id983704975
Good prompting is, in many ways, indistinguishable from good programming. Both require breaking a problem down into discrete steps, being clear in one’s communication, and understanding the limits of the system being prompted.
id983704976
Our innate fear of missing out or falling behind is something that those pushing their AI services and tools are naturally inclined to capitalize on. Being clear about our goals and seeking tools that serve our needs rather than bend us to their ways of operating is, as ever in technological systems, essential.
Apropos of hype and agency.
id983704977
Knowing that chatbots were based on the same underlying technology as Smart Compose, it was easy for me to dismiss them as souped-up versions of what I’d already become accustomed to. I wasn’t alone in my skepticism. On the internet, a derogatory name for what chatbots do cropped up. People started calling language-based generative AI “spicy autocomplete” or “autocomplete on steroids.”
Stochastic parrots
id983704978
But—and this was the crucial mental breakthrough for me and key, I believe, to convincing other AI holdouts—as the months went by I observed how AI continued to sneak into tools I already used. Observing that, I began to feel a little less precious about my work. I came to realize that no matter how creative your work is or how good you are at it, there is something about every job that is repetitive, tedious, draining—and best handed off to AI. The data backs this up: By early 2025, just over two years after ChatGPT debuted, in a survey of American workers by Gallup, 45 percent said that AI has helped them become more productive.
What is the critical point? Showing the efficiency gained by delegating routine tasks?
id987542368
Semantic search is not merely one of the things that large language models do; it’s fundamental to how they do everything.
id987542369
Every word in a large language model is stored not just as a string of letters but as a vector, which is a sequence of numbers with both a magnitude and a direction.
id987542370
While the human mind cannot easily visualize dimensions higher than three—that is, length, width, and height—in mathematics and computer science, there are no such constraints. Spaces of effectively infinitely many dimensions can be represented. By simply adding more numbers to the end of the list of numbers that represent a vector, it’s possible to create a database of vectors of however many dimensions we like. Creating a high-dimensional space for our vectors is important for what happens next. In this multidimensional space, we’re going to plot the location of every word in a given language in relation to every other word in that language.
id987542371
Every sequence of numbers associated with a given word is what’s known as a “word embedding.”
id987542372
OpenAI’s GPT-3 represents every word in English with 12,288 dimensions. In other words, every word is assigned 12,288 different numbers.
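A toy version of word embeddings with made-up 3-dimensional vectors (versus GPT-3's 12,288) shows how similarity between vectors captures relatedness: "cat" lands closer to "dog" than to "car" under cosine similarity.

```python
import math

# Made-up 3-d embeddings; real models use thousands of dimensions
# learned from data, not hand-picked numbers.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.15],
    "car": [0.1, 0.2, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

print(cosine(embeddings["cat"], embeddings["dog"]))  # close to 1
print(cosine(embeddings["cat"], embeddings["car"]))  # much smaller
```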
id987542373
How, you might ask, is the relationship between every word in the English language and every other word in the English language determined? The simplest answer is that artificial neural networks are trained on large bodies of text—say, the entirety of Wikipedia. Researchers have been doing this sort of thing since at least 2013, when a team at Google first published on the subject.
id987542374
This kind of AI is what’s known as “grounded” in the data that’s fed into it. In a way, it uses the Gemini large language model as an advanced—that is, semantic—search engine, capable of finding what I might want in documents and taking me directly to it.
id987542375
At its base, all of this is enabled by the vector database underpinning today’s AIs. Knowing that this is what’s under the hood can help us understand what computer scientists mean when they say that AIs are mostly doing an approximate search of their vast knowledge—or the documents we dumped into them—rather than producing new knowledge or truly reasoning.
Although I understand it intuitively, I would not be able to explain why this vector-based infrastructure implies that the model does not "reason."
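The "approximate search" idea can be sketched as ranking pre-embedded document chunks by cosine similarity to a query vector. The chunk texts and vectors below are invented for illustration; a real system would compute them with an embedding model and an indexed vector database.

```python
import math

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

# Pretend each chunk of our documents has already been embedded.
chunks = [
    ("How to reset your password", [0.9, 0.1, 0.0]),
    ("Shipping times and costs",   [0.1, 0.9, 0.1]),
    ("Warranty and returns",       [0.0, 0.2, 0.9]),
]

def semantic_search(query_vector, top_k=1):
    """Approximate search: rank chunks by vector similarity,
    not by exact keyword match."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vector, c[1]),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query embedded near the "password" chunk retrieves it.
print(semantic_search([0.8, 0.2, 0.0]))
```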
id987542376
Large language models that talk to themselves in order to “reason” through a problem. They are often bedeviled by issues that arise from problems with the large language models on which they are based; for example, they can be brittle and hallucinate, resulting in chains of “thought” that take reasoning models down rabbit holes that are unhelpful or irrelevant to the problem at hand.
id987542377
AI’s capabilities are limited by its nature, which is more of an approximate search engine for its vast database of knowledge and rules of thumb than a system that is capable of humanlike reasoning. By feeding AIs as much information—that is, context—about the task at hand as possible, we can use approximate search to accomplish tasks that would otherwise require genuine reasoning. We have to spend time with new AI tools before we can understand the trade-offs they require—usually between the quality of their work and the time they save us. When trying a new tool, there is no substitute for this lived experience.
id987542378
If conventional generative AI is a tool for knowledge work that requires a human to wield it—in our use of it, little different from a word processor or web browser—agentic AI is the industrial robot of this kind of work. Able to operate on its own given the right guardrails and a narrow enough set of tasks, agentic AI is beginning to do to knowledge work what robotics and related automation have done to blue-collar work.
A good analogy.
id987542379
The simplest definition of agentic AI is that it’s what happens when today’s generative AIs are given access to the same kinds of tools we are—everything we can access through the internet—and are tasked with using them. The canonical example is chatbot-based systems for customer service, but agentic AI is making rapid inroads into the automated handling of cybersecurity breaches and myriad systems for handling back-office tasks such as paying invoices.
The key is in the name: agency. Agents are LLMs that do things (instead of just saying them).
id987542380
systems of knowledge management, such as the ones I use to write my columns, can help individuals organize masses of text and other unstructured data. But where they really shine is in their ability to collect and make accessible masses of information for teams and even whole organizations.
id987542381
The novel-length documents that the company’s own reps used to have to search through when trying to help customers solve problems have been cleaned up and chunked into the smallest possible articles on individual issues and their remedies. Creating an AI system that can serve customers faster has required that the company’s technicians take on a new role, but they’re no less vital to the company’s success—and that of its customers. The need for BACA’s technicians to invest hundreds of hours in updating the company’s database is surprisingly common in companies I’ve talked to, and there’s a simple reason for it. A well-worn principle of software engineering is the concept of “garbage in, garbage out.” The first instance of this phrase dates all the way back to the 1950s.
What similar transformations will be required in schools? What will the role of UTP be?
id987542382
“Garbage in, garbage out” applies to AI just as much as to any other software system. The most common answer I hear when I ask engineers how they reduce the rate of hallucination in their AI systems is that they make sure they feed it only the highest-quality information.
This was one of the main lessons developed at Proyecta Chile.
id987542383
Researchers at Microsoft have shown that it's possible to create large language models that equal much bigger and more resource-intensive ones simply by training on higher-quality data. The bottom line is that careful selection, editing, and prescreening of the data fed to AIs is essential to creating systems that are more useful than competitors'.
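A hedged sketch of what that prescreening can look like in practice: filter out thin, stale, or duplicated documents before they ever reach the AI's knowledge base. The thresholds and record fields here are illustrative assumptions, not taken from any particular company's system:

```python
# "Garbage in, garbage out" prevention: prescreen documents before indexing.
def prescreen(documents, min_words=20, max_age_days=730):
    """Keep only documents that are substantial, recent, and not duplicated."""
    seen_texts = set()
    clean = []
    for doc in documents:
        text = doc["text"].strip()
        if len(text.split()) < min_words:   # too thin to be useful
            continue
        if doc["age_days"] > max_age_days:  # likely stale guidance
            continue
        if text in seen_texts:              # exact duplicate
            continue
        seen_texts.add(text)
        clean.append(doc)
    return clean

docs = [
    {"text": "Step-by-step remedy " * 10, "age_days": 30},    # kept
    {"text": "too short", "age_days": 30},                    # dropped: thin
    {"text": "Step-by-step remedy " * 10, "age_days": 30},    # dropped: dup
    {"text": "Detailed but old advice " * 10, "age_days": 900},  # dropped: stale
]
print(len(prescreen(docs)))
```

Real pipelines use richer signals (authorship, review status, semantic near-duplicates), but the principle is the same: the model's answers can only be as good as what survives this filter.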
id987542384
Bob’s team at Wiley used Salesforce’s AI Agent Builder to create a chatbot. While this tool can use code, it also allows people to build agents with no code at all. This ability is hardly unique to Salesforce; Microsoft, Box, Asana, and many other established enterprise software companies offer comparable tools. The result is that people who are logical thinkers and clear communicators are now building with these tools, even if they don’t have any coding ability or experience.
I could dig into the capabilities of Copilot and Agent Foundry.
id987542385
In the pre–generative AI era, which I spent more than a decade chronicling, the key to making systems that could really move the needle for a company was hiring data scientists to gather, organize, and clean up data. Almost all data needed to be numerical or had to be converted so that it was. Data needed to be as clean as possible, because the algorithms that were processing it—many of them just glorified statistical tools borrowed from other fields—were brittle. But something like 90 percent of the data possessed by most companies is unstructured data: text, images, presentations, video, audio recordings, and the like. In other words, the vast majority of data possessed by most companies—and most individuals—is the kind that humans can generate and process. The real key to the modern AI revolution is that today’s AI can take data that used to require slow, expensive humans to process and process it in the blink of an eye.
id987542386
At this point, a question may be forming in your mind: If the future of AI is that we grant it ever more abilities and autonomy, doesn't that contradict the Third Law of AI—that in its ultimate form, most people will use AI most of the time as a feature of other software and services rather than as a product unto itself? Here's why that won't be the case. As we've learned from the history of automating tasks on a computer—from scripting languages to robotic process automation—these tools are wielded primarily by people whose job it is to write software and build IT systems. The reason is simple: No matter how easy you make it to program a computer, noncoders just don't have the time or interest. An iron law of tech adoption is that the simplest, most user-friendly way of interacting with a new technology will win the race to mass adoption every time. The future might be full of countless AI agents, but they'll be operating in the background, and when they do need us to step in and evaluate their work or steer their operations, it will be in ways that minimize the demands they make on our attention.
This observation counterbalances the idea that everyone will develop their own software.
id987542387
Agentic AI: Pretty much any AI that has the ability to do anything beyond responding to prompts with images and text.
id987542388
Knowledge base: A library of knowledge that must be updated, pruned, and tended, but thanks to AI, doesn’t require the kind of structure that previous databases did.
This could be developed at GE.
id987542389
At one point I asked for clarification about which elements of the ads had been generated with AI. Oh, they responded, all of them. Everything in the ads? Yes, the ads had all been completely generated with AI. That was the moment I realized that none of us, no matter how savvy we think we are about detecting the telltale signs of AI-generated content, can trust any image or video we see on the internet ever again.
id987542390
“It took experimentation, trial and error, learning what works, and then scaling what works right,”
The right way to understand how to use AI: through experimentation.
id987542391
What makes AI useful for creative teams is that in the time they could previously have produced a couple of ads using conventional tools—stock or in-house photography, Photoshop, humans brainstorming—they can now make ten.
Prototyping is an effective use of AI in creative work.
id987542392
One of the challenges of talking about AI is that people have very different definitions of it, depending on their history with it. For people whose first direct encounter with AI is ChatGPT, AI equals generative AI. This is a problem, because generative AI, as big as it has become and as rapidly as it’s evolving, is actually a tiny speck in the vast ocean of techniques that qualify as AI.
It's important to have a clear taxonomy of the field as a whole.
id987542393
If we spread the taxonomy of AI across a timeline of the history of computing, almost the entire history of AI is about the creation of everything but generative AI. Nearly the whole timeline is full of variants and elaborations of the other type of AI: discriminative AI. Discriminative AI is what almost everyone was talking about pretty much right up until the moment that ChatGPT and image generators burst onto the scene in 2022. (AI researchers were working on generative AI well before that, but few outside their specialty knew about it.) It is the type of AI used in image recognition, for example, in the systems that can identify faces in your photographs. It’s also used in sentiment analysis, as in what marketers use to determine if people are saying good or bad things about a brand, product, or politician on the internet. It’s used in fraud detection, such as the kind that automatically rejects suspect charges on your credit card. It’s also used in medical diagnosis, spam filtering, and the list goes on. Related, some might say interchangeable, terms for discriminative AI are “predictive analytics” and “predictive AI.”
Predictive AI as the historical predecessor of generative AI.