The World Inside Out. How to Be a Programmer and Not Write Code

Konstantin Khait

The World Inside Out

How to Be a Programmer and Not Write Code






Contents

The World Inside Out, or How to Be a Programmer and Not Write Code


Konstantin Khait


2025

Misunderstood interlocutor

To tell the truth, I did not want to write these chapters. In a world where, in just a few years, communication with robots has supplanted interaction with the closest people to an extent that is frightening to imagine, teaching anyone to use text-based neural network tools is a monkey’s job. Worse than that, it is bad taste bordering on ethical suicide. Indeed, if not so long ago all sorts of coaches, mentors, advisors, and consultants were rightly considered charlatans and expelled from decent society, now the “great art of prompting” is taught almost in universities. Often by the same people who previously taught us to become rich and successful, often lacking any technical education or even personal experience with neural networks. Their sudden “expertise” rests on the eternal triad: the triviality of the subject taught, incredible aplomb, and the learned helplessness of the audience. An audience that could quite possibly master this very “prompting” without any mediation from high-minded teachers who have watched a few videos and, I cannot guarantee it, perhaps even read a thick book about large language models and picked up a few smart terms from it. Voluntarily joining the long line of exploiters of human indecision, selling what is available to everyone for free, is tantamount to an admission of fraud. And I would never have gone for it if not for a few of my friends who persistently recommended “starting from the very beginning”.

Actually, this book was not written to explain how to use ChatGPT, Google Gemini, or whatever might replace them by the time you, if you are lucky, finish reading it. Its goal is to show how tools based on LLMs can challenge traditional “algorithmic” (as well as “functional” and other deterministic) programming, and how those of you who have no coding experience or mathematical and algorithmic training, but do possess common sense and systematic thinking, can become programmers, or rather software product engineers, using this approach, which is significantly underestimated at present.

Teaching the use of ChatGPT, in my deep conviction, is as pointless as my grandmother’s attempts to teach me how to use a telephone. This remarkable elderly woman diligently showed her grandson how to turn the rotary dial (yes, I am old enough that the phones of my childhood didn’t have buttons), and the child wondered why spend time on something that is elementary from the very start. It is just as inappropriate to teach my children the art of using a mobile phone when my three-year-old daughter, born, you could say, with a screen in front of her eyes, launches any applications perfectly without any training. And my sons, not yet fully literate, use neural networks for any and all reasons, without the need for any courses or training. Modern tools are such that not only housewives but even infants can handle them just as well as any “professionals”… If we are talking about the everyday tasks of housewives and infants. However, when it comes to solving more complex problems, simplicity turns into confusion and bewilderment. And that is precisely why I will have to start from the very beginning.

Devil machine

The famous Soviet comedian Mikhail Zhvanetsky loved to tell a joke about an American sawmill bought for a Siberian lumber mill. Delighted by the capabilities of the overseas machine, the Soviet workers fed it everything from thin branches to enormous logs, and the miracle of technology successfully handled every task. But the persistent and proud lumberjacks didn’t stop and eventually shoved a rail into the sawmill. The machine broke down, and the workers returned to their usual tools, noting the pathological imperfection of Western technology.

I won’t hide it: my first acquaintance with LLMs followed roughly the same pattern. With the persistence of a savage, I loaded the devil-machine with “human” tasks, drawing far-reaching conclusions from its apparent inability to solve them the way I wanted. And only after many months, having racked my brain considerably and experimented with software APIs, did I come to the generally obvious understanding that any thing must be used for its intended purpose.

Alas, many of my acquaintances have not come to this understanding. They laugh when the neural network draws a sixth finger, two tails, writes a recipe for cooking pork wings, or makes mistakes in simple calculations. Although this is about the same as laughing at an orangutan who forgot to tighten a nut when assembling a jet engine, or, even more precisely, at a genius violinist unable to take a double integral. It would be much more appropriate to be impressed that the neural network can, in principle, draw or give coherent answers to complex questions. Especially since the majority of critics themselves are not capable of drawing anything similar, whether with six fingers or with four.

Here we have another example of the paradox of the human psyche: when interacting with something through a simple interface, we tend to forget how complex the thing we are actually operating is. Just like when driving a car, we automatically turn the steering wheel and press the pedals without thinking about the intricate technical solutions embedded in the engine, transmission, and suspension design. And we remember them only at the moment when a failure, malfunction, or complex situation occurs where our skills cease to be sufficient. And then the ability to operate is not enough, and we have to understand the machine.

This is exactly what happens when interacting with ChatGPT. The “messenger” interface is so simple, and the illusion of communicating with a live interlocutor is so evident, that we almost forget who we are dealing with. And we demand reactions and responses from the interlocutor that we would expect from a living person. Moreover, from a person with sky-high intelligence, having access to all the knowledge in the world and capable of processing it at the speed characteristic of computers.

In most cases, our expectations are justified: these tools were created for this very purpose. But expectations are expectations, and in reality a chatbot based on a large language model is a tool with its own properties, characteristics, and limitations. And when the task set for it goes beyond acceptable boundaries, we get amusing, idiotic, or simply incorrect answers. This does not mean that the tool itself is bad, but rather that it is of little use to someone who stubbornly tries to shove a rail into the sawmill instead of understanding how the sawmill works.

In my observation, the most adequate users of neural networks are managers. I mean real managers, good managers: they professionally view subordinates as resources and tools, understand the limits of their capabilities and their own responsibility for setting tasks correctly, and know how not just to expect results but to achieve them. For them, communicating with an LLM is a fairly familiar process, and they are less likely than representatives of most other professions to make meaningless complaints and fall victim to unfounded expectations. They are also less prone to ridiculous enthusiasm or, conversely, dramatic horror at a future takeover of the world than impressionable individuals from other industries. After all, any manager has had to deal with people who far surpass them in intelligence, and they know that besides intellectual abilities there are many other strengths that allow one to control, subordinate, and utilize.

But let’s return to the infernal machine. Here and further, we will talk about text tools based on LLMs, large language models. We will not delve into the details of their structure, as there are plenty of thick monographs, lengthy educational videos, and university courses for that. We are quite content with the most basic understanding.

A large language model should not be imagined as a horned devil, a huge brain with an incredible number of convolutions, or a complex mechanism with billions of gears. Personally, I recommend not imagining it at all, even if you are an expert in this field, simply because it is distracting. When driving a car, we do not think about how the flywheel rotates, the pistons move back and forth, and the connecting rods swing. Similarly, there is no point in delving too deeply here. It is enough to imagine a box; IT specialists in such cases say “black box.” Through this box, billions of texts in different languages have been passed. You did not witness this “passing”: it is called “training” and was carried out by OpenAI, Google, or someone else who kindly provided you with the model. Millions of dollars, megawatts of electricity, and tens of thousands of hours of work by powerful GPUs, which we still call graphics cards, were spent on it. And we have practically nothing to do with this process.

It is important to us that the neural network, that is, what sits inside the box, does not remember all these texts, but is able to determine the probability with which a given sequence of words will be followed by one next word or another. If you are not sure you understand this phrase, reread it until you do: without it, everything that follows will be empty page-turning. So, the trained neural network knows nothing, understands nothing, makes no conclusions, and has no idea about causes and effects. It is only able to provide the most probable sequences of words following those you submit as your queries. All its “knowledge” and “understanding” are embedded in these very probabilities, nothing more. The main detail, the “engine” of any neural network tool, is extremely simple in its logic. Its intelligence comes from the vast number of accounted-for connections between words, allowing it to provide relevant answers to most real queries.
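To make this idea tangible, here is a deliberately primitive sketch in Python. It is not how a real LLM is implemented (a real model computes probabilities over tokens with a neural network, not a lookup table); it only illustrates the principle “given the words so far, pick a likely next word”:

    # Toy illustration only: a lookup table of next-word probabilities,
    # standing in for what a real model computes with billions of weights.
    next_word_probs = {
        ("twice", "two"): {"is": 0.9, "equals": 0.1},
        ("two", "is"): {"four": 0.95, "five": 0.05},
        ("two", "equals"): {"four": 0.95, "five": 0.05},
    }

    def continue_text(words, steps=2):
        """Repeatedly append the most probable next word given the last two words."""
        words = list(words)
        for _ in range(steps):
            context = tuple(words[-2:])
            dist = next_word_probs.get(context)
            if dist is None:                     # nothing learned for this context
                break
            # pick the most likely continuation (a real model samples from the distribution)
            words.append(max(dist, key=dist.get))
        return " ".join(words)

    print(continue_text(["twice", "two"]))       # -> "twice two is four"

Everything a real model adds to this picture is the sheer number of learned connections and the sophistication with which those probabilities are estimated.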

However, this basic structure of an LLM also gives rise to its limitations.

First of all, the neural network is trained on a specific dataset, and the connections built within it are based on this training sample. For general-purpose networks, this sample is taken from the commonly used information environment, the internet and books, and it reflects the knowledge and information available in that environment. The Earth is round, America was discovered by Columbus, killing people is bad, the Sun is a yellow dwarf, and Plato is a Greek philosopher. Whether you agree with this or not, the connections between words are formed this way, and unless special effort is applied, the corresponding questions will yield the corresponding answers.

From the nature of neural networks follows their most famous, most unpleasant, and most ridiculed property: hallucinating. In response to our queries, the LLM provides the most probable, “best” sequence of words, it cannot not provide one, but there are no guarantees that this sequence is adequate to what actually exists. If the body of training text gives no grounds to treat an answer as irrelevant, the neural network has no other way to evaluate it: it simply has no sensory organs and no “life experience”; “pork wings” mean nothing to it, it has not seen a pig, does not know what a pig is, and does not even realize the reality in which pigs exist. The LLM operates with words, and this phrase turns out to be the most relevant of all possible ones, that’s all. We will talk later about how to deal with this, but for now we have to accept the inevitability of hallucinations as such.

For the same reason, neural networks themselves do not know how to count, or more precisely, to calculate. There are plenty of texts that say twice two is four, so to the question “what is twice two” the answer “four” is the most likely. But there are negligibly few texts that would say what the square root of six hundred eighty-five multiplied by one and a half is, and therefore the chances of getting the correct result are also low. There are ways to teach, not the neural networks themselves, but tools built on top of them, to count; by their very nature, however, LLMs cannot calculate.
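The usual way around this is to let the model produce the formula and let ordinary code do the arithmetic. Below is a minimal sketch of that division of labor; call_llm is a hypothetical placeholder for whatever model API you use, and real products wire this up through their own function-calling mechanisms:

    import math

    def call_llm(prompt: str) -> str:
        # Hypothetical placeholder: substitute a real model client here.
        raise NotImplementedError

    def calculator(expression: str) -> float:
        # The deterministic part: exact arithmetic done by code, not by the model.
        # A restricted eval is used for brevity; production code needs a real parser.
        return eval(expression, {"__builtins__": {}}, {"sqrt": math.sqrt})

    def answer_with_tool(question: str) -> str:
        # Ask the model for the expression, not for the final number.
        expression = call_llm(
            "Rewrite the user's question as a single arithmetic expression, "
            "using sqrt() if needed. Output only the expression.\n" + question
        )
        return f"{expression} = {calculator(expression)}"

    # For "the square root of six hundred eighty-five multiplied by one and a half"
    # the model would return something like "sqrt(685) * 1.5", and the exact value
    # comes from calculator(), not from the probabilities of word sequences.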

Nor is it a universal knowledge base, although that is exactly how it is very often used, sometimes successfully, sometimes not. If the question concerns commonly known facts, then with high probability the connections between the words describing them will be “strong” enough and the answer will be semantically correct. But it is quite possible that the training texts contained some similar, “distracting” fragments that will pull in inappropriate, inadequate, irrelevant word sequences, and the meaning of the result will be distorted. That is how I once tormented ChatGPT with questions about Cardinal Richelieu’s brothers.

Using language models to manage knowledge bases and search through them is not only quite possible but necessary; however, the neural network itself is not such a base. And expecting universal erudition from it is very reckless.
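The working pattern is to keep the facts outside the model and hand it only the relevant excerpts together with the question. Here is a minimal sketch of that idea, with a deliberately naive keyword search standing in for the embedding-based retrieval real systems use, and call_llm once again a hypothetical placeholder:

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # hypothetical placeholder for a real model client

    def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
        # Naive ranking by keyword overlap; real systems use embedding search instead.
        q_words = set(question.lower().split())
        ranked = sorted(documents,
                        key=lambda d: len(q_words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def ask_knowledge_base(question: str, documents: list[str]) -> str:
        facts = "\n".join(retrieve(question, documents))
        prompt = ("Answer strictly from the facts below. "
                  "If they are not sufficient, say so.\n\n"
                  f"Facts:\n{facts}\n\nQuestion: {question}")
        return call_llm(prompt)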

About the Size That Matters

Well, the best-known and most unpleasant limitation of neural networks is, of course, the size of the context. The number of coefficients that account for the probabilities of connections between words in modern LLMs is enormous, which is why they are called “large models,” but it is still limited. So is the size of the text they can accept as input and generate as output. The chunk of meat that can be shoved into the neural-network meat grinder at one time is of limited size, and as of today it is impossible to pass through it in one piece not only a voluminous tome but even a modest chapter.

The worst part is that technically this is not a strict limitation. You can feed the model as much data as you want, but the relevance of its responses will rapidly decrease.

Here it is appropriate to mention, finally, that a “neural network” in the sense of a “large language model” and a “tool based on it” are far from synonyms, although in practice the two terms are not always distinguished. The LLM is that very box into which text is fed and out of which, roughly speaking, comes the most likely continuation according to the training data. However, the training data, as we have already discussed, has long been forgotten and we cannot influence it, so the text, in addition to the actual request, must contain the additional information needed to obtain a response that is relevant in meaning. In dialogue-type tools, this is at least the history: previous user questions and system answers. Beyond that, additional data and instructions hidden from our attention may be added, and the entire array is fed into the neural network. They can be substantial, or they can concern, for example, issues traditionally related to safety. For instance: “Answer the following question if it does not touch on topics of sex, violence, and child labor exploitation, otherwise write that you cannot help with this request. User question:…”.
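Roughly speaking, a dialogue tool assembles something like the following before the model ever sees your message. The exact format below is invented for illustration; every product does it in its own way:

    HIDDEN_INSTRUCTION = (
        "Answer the following question if it does not touch on topics of sex, "
        "violence, and child labor exploitation, otherwise write that you cannot "
        "help with this request."
    )

    def build_model_input(history: list[tuple[str, str]], user_question: str) -> str:
        # Hidden instruction first, then the accumulated dialogue, then the new question.
        lines = [HIDDEN_INSTRUCTION]
        for question, answer in history:
            lines.append(f"User: {question}")
            lines.append(f"Assistant: {answer}")
        lines.append(f"User question: {user_question}")
        return "\n".join(lines)

    print(build_model_input(
        [("What is an LLM?", "A large language model.")],
        "And what is a context window?",
    ))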

The end user never sees this note, or many others like it, and knows nothing about them, but the result of their interpretation naturally has a dramatic effect on the model’s response. The formation of such additional instructions and explanations is essentially the only non-trivial subject of “prompting.” And it makes sense to treat it as a kind of “tuning” of the neural network: it is not training in the strict sense, since neither the network’s structure nor the weights of its nodes change, but from the perspective of obtaining relevant answers it is precisely an enrichment of the knowledge possessed, not by the LLM itself, but by the tool built on top of it.

From the inherent limitation of the neural network’s context size it follows that endlessly expanding the instructions to make them more detailed and specific is a dead-end path. Sooner or later, the volume of such text will exceed what the model can “digest” at once, and instead of producing more accurate and adequate output, it will start to hallucinate, ignoring or misinterpreting some of the instructions. Moreover (and here is its difference from a human), an arbitrary part will be dropped, not necessarily the least important one. Natural intelligence, however, also suffers from similar “information overload,” so the main principles and methods of “neural network pedagogy” are generally similar to the principles and methods of traditional pedagogy.

Precise, specific, and unambiguous instructions save the situation to some extent. But only until the volume of data that needs to be processed begins to exceed the capabilities of the neural network itself. After that comes the moment for which this book was written: the moment when it becomes necessary to use a symbiosis of LLMs and familiar algorithmic solutions.

On the Accuracy of the Magic Kick

The most practically useful property of neural networks is not their ability to coherently answer questions, provide information on a given topic, or generate memes about cats. It is the ability to produce structured text suitable for further processing.

One of the most in-demand types of such text is program code, especially since popular neural networks have been trained on huge datasets of such code and therefore impressively handle the task of coding in popular programming languages. That is why millions of programmers, as well as people who consider themselves programmers, strive to be programmers, or dream of becoming them, have rushed to code using ChatGPT, Copilot, Codeium, and so on.

They also quickly discovered that the problem of limited context manifests itself very clearly when writing code with neural networks: as long as the solution is limited to a short fragment and the task is more or less typical, the LLM provides an ideally correct and high-quality solution. But as the complexity grows, or rather as the amount of code required for the implementation grows, and the request moves away from the standard set of frequently encountered problems, the results deteriorate; glitches, failures, and “sabotage” begin, much like the behavior of a student who desperately needs to pass an exam while their knowledge is limited to the one study guide they have read.

In fact, that’s exactly how it is: the neural network is obliged by its nature to produce a result, and the context it can operate with is limited by the training dataset and the maximum volume of input data. Therefore, writing truly complex and lengthy programs “head-on” using artificial intelligence turns out to be impossible.

How to actually do this using techniques different from “write me an operating system” is the subject of further discussion. For now, I have to say a few words on the cursed topic of prompting.

Since people engage in artificial intelligence not so that machines will one day take over the world, not out of pure vanity, and not for fun, but to solve their human tasks, they are interested in those solutions being correct and practically applicable. And so, as with any other tool, it is extremely important that it be used correctly. How exactly naturally depends on the task we are solving. Still, any tool has areas where it is most effective, and large language models are most effective at generating structured text. For this reason, legal documents, not to mention technical ones, come out better with their help than stories and essays, and resumes better than journalistic pamphlets. Even artistic works will be of higher quality if they are built on a well-organized framework, with details added gradually: for example, first request a story plan, then a description of the characters, then, using them as context, the introduction… and so on.
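That last recipe is easy to mechanize. A minimal sketch of the staged approach, with call_llm once again a hypothetical placeholder for a real model client, might look like this:

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # hypothetical placeholder for a real model client

    def write_story(premise: str) -> str:
        # Build the framework first, then feed earlier answers back in as context.
        plan = call_llm(f"Write a numbered plan of a short story based on: {premise}")
        characters = call_llm(
            f"Story plan:\n{plan}\n\nDescribe the main characters, one paragraph each."
        )
        opening = call_llm(
            f"Story plan:\n{plan}\n\nCharacters:\n{characters}\n\n"
            "Write the opening scene, following the plan and the character descriptions."
        )
        return opening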

In other respects, the principles of prompting more or less align with the techniques of effective management: clear task setting, clear unambiguous description of input data and expected results, and, what is often neglected, assistance and support for the model at every step.

With the first two points everything is clear, and the “tips from the experienced” have become clichéd, so they can be repeated as briefly as possible: an artistic style and convoluted politically correct formulations generate, respectively, ambiguous and barely applicable responses. The neural network is not a negotiation partner but a diligent performer; to use it effectively, employ an imperative style and categorical constraints. Not “could you,” but “you must”; not “I would like,” but “you should”; use “given” and “required.” As structured as possible, clearly, point by point. There is this and that, do this and that under such and such constraints. Avoid this, apply that. “Thank you” and “please” are discarded at the data filtering stage; it is foolish to fill the already limited context with them.

Even better, give the input data some structure, tabular or list-like, ideally formatted in the style of XML or JSON. What those are, I recommend reading about on Wikipedia or asking ChatGPT; if you intend to read further, you will need them in any case.
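As an illustration of both points, the imperative style and the structured input, here is a sketch of a prompt built around a JSON payload. The task and field names are invented for the example:

    import json

    order = {
        "items": [
            {"name": "keyboard", "qty": 2, "price_usd": 45.0},
            {"name": "monitor", "qty": 1, "price_usd": 230.0},
        ],
        "customer": {"country": "DE", "vat_registered": True},
    }

    prompt = (
        "You are given an order as JSON. Write a confirmation email in English.\n"
        "Constraints: list every item with quantity and line total; state the grand "
        "total; do not invent items that are not in the JSON.\n\n"
        "Order:\n" + json.dumps(order, indent=2)
    )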

But perhaps the main thing to remember is that the neural network has neither eyes nor ears; unless it is a very advanced tool, of which there are currently very few, it cannot substantively verify its own answers. You are its senses. Yes, you. Only you can ultimately assess the relevance of the results and provide feedback to the system. That feedback becomes part of the context of subsequent requests and improves their results. It is important to remember that what “improves,” that is, becomes more relevant, is not the neural network itself but the instructions supplementing your data, so your hints must not exceed the acceptable volume of context. In particular, if you are communicating with an LLM in dialogue mode, sooner or later old messages will stop being taken into account and… you will have to remind the neural network of something you have already “discussed” earlier. At times this may feel like talking to someone with memory loss; from the standpoint of practical applicability, however, it is the only correct path, far better than complaining about the imperfection of a hammer that cannot drive a nail without outside help.
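Why the old messages drop out is easy to see from a sketch of one crude strategy a chat tool might use when a dialogue outgrows the context window: keep the newest turns, forget the oldest. The word-count budget below is an invented stand-in for the real token limit, which would be measured with the model’s own tokenizer:

    MAX_CONTEXT_WORDS = 3000  # assumed budget, standing in for the real token limit

    def trim_history(history: list[str], new_message: str) -> list[str]:
        kept = [new_message]
        used = len(new_message.split())
        for message in reversed(history):      # walk from newest to oldest
            cost = len(message.split())
            if used + cost > MAX_CONTEXT_WORDS:
                break                          # everything older is simply forgotten
            kept.insert(0, message)
            used += cost
        return kept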

How to write code without knowing how to program

Oh, how hard it is for me to write this chapter. I can already see hordes of programmers ostracizing me for heresy and apostasy, and most importantly, for betraying the sacred mystery of the profession. And legions of humanities graduates hoping to become engineers without even understanding the terminology, cursing me for their shattered hopes and unfulfilled expectations. Therefore, I have to start with an explanation (which, of course, will not satisfy either side)

...