[ $davids.sh ] — david shekunts blog

😱 When AI Takes Your Job, Clothes, Motorcycle, and Wife 😱

# [ $davids.sh ] · message #255


No need to worry if you're not a loser (continued in the comments)

#ai #hypeshit

  • @ [ $davids.sh ] · # 1582

    At some point, I also started to worry: "Will AI take away my ability to come up with algorithms / approaches / architectures and other custom solutions, and just write everything perfectly for me?" – after all, that's the only thing that truly brings pleasure in development.

    But my worries dissipated almost instantly when I asked it to come up with an application architecture and it suggested RabbitMQ... and even when I tried to get it to criticize that technology, I didn't get a single adequate argument out of it... and old-timers of the channel know my endless battle with this furry piece of shit (1, 2).

    And why do AIs like RMQ so much?

    The entire internet is flooded with the idea that RMQ, OOP, MongoDB, Oracle, Metabase, Angular, etc. are awesome, and that if you're a Russian developer you should write in PHP, and if you're American, Ruby → AIs absorb this garbage, digest it, and output a "weighted average" without any independent validation.

    Even if it does encounter criticism of a technology, there's always less of it than laudatory material because:

    (1) authors / companies market their technologies only in a positive light, tailoring all tests and benchmarks to themselves.

    (2) competitors / opponents write about problems in a very neutral tone (unless it's comments on Habr and VC).

    (3) people very often praise technologies or shower them with likes without ever having tried them in production (look at the tens of thousands of stars on JS projects on GitHub, which can't even run or crash after the second fart).

    Therefore, even if a technology is bad, it's enough for it to be popular for LLMs to start recommending it.

    To understand that a technology is of poor quality, AIs would need to go through the developer's journey: write use cases and benchmarks, run them in different configurations, compare them with other similar technologies, save the results, and base their recommendations on them.
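    That "developer's journey" of benchmarking and comparing can be made concrete. A toy sketch in Python (the two candidate functions are hypothetical stand-ins for competing technologies; a real harness would drive actual brokers or databases):

```python
import time
import statistics

# Hypothetical stand-ins for two competing technologies: in a real
# harness these would publish/consume through an actual broker.
def candidate_a(n):
    return sum(range(n))        # the "popular" option, O(n)

def candidate_b(n):
    return n * (n - 1) // 2     # the boring closed form, O(1)

def bench(fn, n, runs=5):
    """Time a workload across several runs and keep the median,
    so one noisy run doesn't skew the recommendation."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(n)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

results = {fn.__name__: bench(fn, 100_000) for fn in (candidate_a, candidate_b)}
# Recommend based on measured results, not on popularity.
print(min(results, key=results.get))  # candidate_b
```

    The point is that the recommendation falls out of saved measurements, which is exactly the validation step the "weighted average" of internet opinion skips.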

    And even if someone does train an AI like that, it will be yet another pile of private write-ups from one company / a few open-sourcers, which still won't be able to outshout the hype machine around shitty technologies.

    And the second problem with LLMs: when asked to formulate a slightly unusual PostgreSQL query, it still hallucinates like a salt-addled St. Petersburg resident. Why? Because a PG query is not a "weighted average"; the correct SQL is a function of your specific schema and the constructs it allows: sql = f(schema).
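    The sql = f(schema) point implies a generated query is only correct relative to the actual schema, so it has to be checked against it. A minimal sketch, using Python's built-in sqlite3 instead of PostgreSQL to stay self-contained (the table names are hypothetical):

```python
import sqlite3

# In-memory DB with a hypothetical support-chat schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE operators (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, operator_id INTEGER, status TEXT)")

def validate_sql(query: str) -> bool:
    """Check that a (possibly LLM-generated) query at least compiles
    against the real schema, without executing it for results."""
    try:
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False

# A query that matches the schema passes...
print(validate_sql("SELECT status, COUNT(*) FROM tickets GROUP BY status"))  # True
# ...while a hallucinated column name fails.
print(validate_sql("SELECT priority FROM tickets"))                          # False
```

    Nothing about internet-wide averages can catch the second query; only the schema itself can.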

    "So, AI won't replace programmers?" โ€“ not entirely...

    Yes, AI is bad (and will remain so) as a tool that "replaces a programmer", because it struggles to grasp established concepts and to judge how suitable one technology or another is. But what if it doesn't have to use existing technologies, and instead creates its own, understandable to itself?

    Imagine you enter a prompt: "Create a chat system database with tech support as an embedded widget with an admin panel and operator dashboards" โ€“ and the neural network generates a DB schema, backend, and frontend code, but not in the databases and programming languages we know, but in its own, which are absolutely inconvenient for human use because they are specifically created for AI to use itself.

    The "code" will be a set of neurons that decide how to combine certain information, call the necessary OS instructions, what and how to run on a server cluster, and so on.

    That is, I assume that AI will have its own prompt languages, its own Kubernetes, databases, caches, message queues, which will operate according to their own laws.

    These technologies will occupy their own market: landing pages, CRMs, ERPs, LMSs, and similar "template-based" projects, but tasks that are solved for the first time or in a new way will still be written using the technologies we know now and their descendants.

    What do you think, does it sound plausible?

  • @ Katya Yusupova 122 · # 1583

    Damn, I need to find a wife so that AI can select her.

  • @ Grigoriy Junior PO Zheleznjak 🚂 · # 1584

    Has AI already learned to annoy devs constantly with messages like "Colleagues, any updates on the tasks?"? My friend is just asking.

  • @ Lera Fedotikova · # 1585

    Wouldn't want AI to take your wife... 🤦‍♂️

  • @ [ $davids.sh ] · # 1586

    This, by the way, gave me the idea that we should expect AI applications from specific companies/communities that we can more or less trust and consult with, because they will have validated context.

    So if you come across any, share them.

  • @ YURII VLADIMIROVICH · # 1587

    Don't relax too soon; if it can't be done today, "it can be done tomorrow" 😉

    Microsoft is actively working on specialized neural networks right now.

    And nothing prevents them from feeding their knowledge base on architecture (and they do understand something about it, considering they managed to build Azure and all its services) to their neural network; then their architect neural network will start replacing you.

    So, it's better to prepare in advance and think about how and where to pivot if the worst-case scenario happens.

  • @ [ $davids.sh ] · # 1589

    Yes, I think I'll write about the worst-case scenario in future posts)

    We'll see, for now, things like ChatGPT and Copilot either hallucinate, lose context, or require a lot of editing.

    Even when I fed GPT various documentation, it still got very lost.

    And by the way, about the knowledge base โ€“ the documentation for Azure, AWS, or Google Cloud is such fragmented garbage (often outdated) that you very often have to write to forums or support to get an answer.

    That is, to write real software, a long cycle of trial and error is needed; AI must write, check, write tests, change, check, and so on, many, many times.

    Consequently, until they learn to try something on their own and correct it, I don't think there's anything to worry about.
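    That write-check-fix cycle is essentially a loop. A toy sketch, assuming a hypothetical generate() standing in for the model (deliberately buggy on its first attempt, so the loop has something to correct):

```python
import subprocess
import sys

def generate(attempt: int) -> str:
    """Stand-in for a code-generating model: the first attempt
    contains a bug; later attempts 'learn' from the failure."""
    if attempt == 0:
        return "def add(a, b):\n    return a - b\n"  # hallucinated operator
    return "def add(a, b):\n    return a + b\n"

def run_tests(code: str) -> bool:
    """Execute the candidate code plus a test in a fresh subprocess."""
    program = code + "\nassert add(2, 3) == 5\n"
    result = subprocess.run([sys.executable, "-c", program])
    return result.returncode == 0

attempts = 0
while not run_tests(generate(attempts)):
    attempts += 1  # feed the failure back and try again

print(f"converged after {attempts + 1} attempt(s)")  # converged after 2 attempt(s)
```

    Until agents close this loop on their own (run, observe the failure, regenerate), a human stays in it.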

  • @ [ $davids.sh ] · # 1590

    Here are my thoughts on potential prerequisites:

    – A neural network is released that studies your architecture (e.g., directly from AWS) and writes documentation for it, plus updates it during deployments.

    – Another neural network that can look at errors you've encountered, debug everything, and tell you where the problem is (or better yet, fix it).

    – Agents become so good that you can ask them not only to write code but also to integrate it, run it, test it, and deploy it.

    If this happens, then that's it, game over. We'll have to urgently switch to OnlyFans.

  • @ Dmitrij IT-Kachalka Malakhov · # 1596

    and there robots have already taken everything and are posting naked asses of non-existent people

  • @ Nikita · # 1597

    Will there be any downsides? 😂