<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Arkinfo Notes]]></title><description><![CDATA[Quantum worlds, deep science, contemplative philosophy, and untamed creativity. Arkinfo Notes is for minds whose curiosity knows no bounds.]]></description><link>https://notes.arkinfo.xyz</link><image><url>https://substackcdn.com/image/fetch/$s_!IZDb!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38919095-a286-4a67-b958-c5bf43328d08_500x500.png</url><title>Arkinfo Notes</title><link>https://notes.arkinfo.xyz</link></image><generator>Substack</generator><lastBuildDate>Mon, 06 Apr 2026 03:23:49 GMT</lastBuildDate><atom:link href="https://notes.arkinfo.xyz/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mir H. S. Quadri]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[mirhsquadri@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[mirhsquadri@substack.com]]></itunes:email><itunes:name><![CDATA[Mir H. S. Quadri]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mir H. S. Quadri]]></itunes:author><googleplay:owner><![CDATA[mirhsquadri@substack.com]]></googleplay:owner><googleplay:email><![CDATA[mirhsquadri@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mir H. S. 
Quadri]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Notes on Stability in Non Linear Systems]]></title><description><![CDATA[Short Notes on Stability in Non Linear Systems]]></description><link>https://notes.arkinfo.xyz/p/notes-on-stability-in-non-linear</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/notes-on-stability-in-non-linear</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 17 Mar 2025 22:26:29 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1f3bb1b5-76c8-4d03-96eb-99becbb16eef_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>Hope you are all doing well. Busy days here at the hive! I have been working on some really exciting projects that I am looking forward to sharing with you all in the coming days. In the meantime though, I had some quick thoughts to share about the nuanced concept of <em>stability</em> in nonlinear systems. </p><p>Initially, I had thought of writing a detailed article on this. But this morning, I thought of trying a different format. I made an audio recording of my thoughts on the matter and shared it on <a href="https://www.youtube.com/@mirhsquadri">YouTube</a>.</p><div id="youtube2-JefUS0zaIPE" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;JefUS0zaIPE&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/JefUS0zaIPE?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>It&#8217;s a bit unpolished and rough around the edges, sorry for that! But I didn&#8217;t want to overthink it. Just wanted to get this out there, choosing the path of least resistance. 
</p><p>Let me know what you guys think of it. Should I venture more into video/podcast-style content or stick to just the written format? I would love to know your thoughts on this. Also, please do remember to <a href="https://www.youtube.com/@mirhsquadri">subscribe to the channel</a>. Thanks in advance!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Do We ‘Create’ Complex Systems?]]></title><description><![CDATA[What a Coffee Shop, Rabbit Island, and LLMs Tell Us About Complex Systems and Our Responsibility in Building Safe AI]]></description><link>https://notes.arkinfo.xyz/p/do-we-create-complex-systems</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/do-we-create-complex-systems</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 10 Feb 2025 14:31:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9COi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>I was at a coffee shop the other day, procrastinating on my writing by watching a video on overcoming procrastination. 
People were moving in and out like clockwork, some lingering over their laptops, others deep in conversation, and a few having what looked like serious meetings, all fuelled by overpriced caffeine and free Wi-Fi. No one seemed out of place, yet nothing about it felt orchestrated. The baristas served coffee and wiped down tables, but they weren&#8217;t controlling the flow of interactions. The store provided the space and resources, but the <em>life</em> of the place came from the people inside, doing their own thing, oblivious to any bigger picture.</p><p>And that&#8217;s when I started thinking about <em>rabbits</em> (like any normal person would). Specifically, the rabbits of <em>&#332;kunoshima,</em> better known as <em>Rabbit Island</em> in Japan. It&#8217;s a small island near Hiroshima, once used for manufacturing chemical weapons during World War II, now overrun by hundreds of free-roaming rabbits. Tourists flock there to feed them, take photos, and take in all the fluffy goodness of these creatures.</p><p>The funny thing is, while humans brought the rabbits there in the first place and love interacting with them, the rabbits don&#8217;t really seem to <em>need</em> us. They&#8217;ve established their own little ecosystem, their own patterns of behavior. We might feed them snacks, but they&#8217;re not waiting on us to survive. 
They&#8217;ve got their own thing going.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9COi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9COi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!9COi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!9COi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!9COi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9COi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9COi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!9COi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!9COi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!9COi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d330faf-0f5f-49ea-8e61-e9029f8a29e7_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - Rabbits at a Coffee Shop</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><p>Sitting there at that coffee shop, it struck me how these two seemingly unrelated scenes, i.e., people in a coffee shop and rabbits on an island, were pretty much telling the same story. <strong>They both represent systems that seem to run themselves, full of interactions and dynamics we didn&#8217;t explicitly design or control.</strong> We might have set the stage, but the complexity emerged on its own.</p><p>And that led me to a question. <strong>Can we really create complex systems? 
Or are we just good at facilitating the conditions for them to arise naturally? </strong>Because those are two very different things. If we are not creators of such systems, then that would imply that we are more like gardeners, nudging things along and watching what grows. There&#8217;s an AI-related angle to it as well that I will introduce as this article progresses. So if I have piqued your interest, please stick around!</p><h1>So, What Are We Talking About?</h1><p><em>What is a complex system?</em> Let&#8217;s start here. A complex system can be thought of as a collection of parts that interact in such a way that the system as a whole behaves differently than the sum of its parts would suggest. In other words, it&#8217;s not additive, it&#8217;s <em>emergent.</em> The interactions between the parts create patterns, behaviors, and properties that you wouldn&#8217;t see if you just looked at each part in isolation.</p><p>I personally liked <em>Yi Lin&#8217;s</em> definition of a complex system from the textbook Systems Science (2012):</p><p><em>&#8220;A system is complex if it is composed of many parts (subsystems) that are complicatedly interwoven and connected, and the behavior, functionalities, and characteristics of the whole cannot be directly acquired from those of the parts.&#8221;</em></p><p>There is no better (or rather, more obvious) example of a complex system than the brain reading this article right now. At the cellular level, your brain is just a mass of neurons, about 86 billion of them. Each neuron is a relatively simple unit. It receives signals, processes them, and sends signals on to other neurons. There&#8217;s no single neuron that <em>contains</em> your thoughts or your sense of self. But when you connect billions of these neurons together, you gain <em>consciousness.</em> Thoughts, emotions, memories, creativity, all of these things <em>emerge</em> from the interactions between neurons. 
It&#8217;s in the <em>connections,</em> the relationships between the neurons, where the magic happens.</p><p>This phenomenon where simple parts give rise to unexpected complexity is what we call <em>emergence.</em> I have spoken at length about emergence in many of my previous articles, both at <a href="https://notes.arkinfo.xyz/">Arkinfo Notes</a> and <a href="https://notebook.lumeni.xyz/">The Lumeni Notebook</a>. But the most detailed analysis of emergence can be found in the famous <a href="https://philpapers.org/archive/QUACEM.pdf">CEMLA paper</a> that I wrote last year.</p><p>Emergence is what allows birds to flock, fish to school, and cities to function. No single bird is leading the flock, no one fish is coordinating the school, and no mayor or city planner could ever fully predict the chaotic, vibrant life of a city. Yet, all of these systems display behaviors that are organised, adaptive, and, in many cases, absolutely beautiful.</p><p>Take an ant colony, for example. Each individual ant follows simple rules, i.e., if it finds food, it leaves a pheromone trail. If it loses the trail, it searches randomly until it finds one again. None of the ants have any idea what the colony as a whole looks like. There&#8217;s no master <em>ant architect.</em> But from these simple interactions, the colony builds intricate tunnels, optimises food collection routes, and even allocates tasks based on the colony&#8217;s needs. The complexity arises from the interactions, not from some grand, top-down design.</p><p>Or consider traffic patterns in a city. Each driver is just trying to get from point A to point B, but their individual decisions, when to accelerate, when to brake, when to change lanes, combine to create larger patterns, i.e., traffic jams, rush hour congestion, or even those odd moments when everything just flows perfectly. 
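</p><p>The traffic example can even be sketched as a toy simulation. What follows is my own illustration in the spirit of the classic Nagel-Schreckenberg cellular automaton (not anything from this post): each driver applies three purely local rules, and jams appear that nobody designed.</p>

```python
import random

ROAD_LENGTH = 100  # cells on a circular one-lane road

def step(road, vmax=5, p_brake=0.3):
    """One parallel update: every driver applies three purely local rules."""
    positions = sorted(road)
    updated = {}
    for i, x in enumerate(positions):
        # distance to the car ahead on the ring
        gap = (positions[(i + 1) % len(positions)] - x - 1) % ROAD_LENGTH
        v = min(road[x] + 1, vmax)   # rule 1: speed up when possible
        v = min(v, gap)              # rule 2: brake to avoid the car ahead
        if v > 0 and random.random() < p_brake:
            v -= 1                   # rule 3: occasional random hesitation
        updated[(x + v) % ROAD_LENGTH] = v
    return updated

random.seed(0)
# 30 cars at random positions, all initially standing still
road = {x: 0 for x in random.sample(range(ROAD_LENGTH), 30)}
for _ in range(50):
    road = step(road)
print("cars currently stopped:", sum(1 for v in road.values() if v == 0))
```

<p>Every driver only speeds up, keeps a safe gap, and occasionally hesitates, yet stop-and-go waves condense out of these rules with no one causing them. 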
No single driver is responsible for a traffic jam, but the jam is a product of everyone&#8217;s collective behavior.</p><p>This brings me back to what I saw at the coffee shop. The baristas weren&#8217;t orchestrating the flow of conversations, the tapping of keyboards, or the spontaneous meetings happening at different tables. The store simply provided a space, a set of conditions, i.e., coffee, seating, and Wi-Fi. People filled that space with their own interactions.</p><p>We see this same <a href="https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence">emergent behavior in </a><em><a href="https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence">artificial systems</a></em> like large language models. These models <a href="https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind">don&#8217;t </a><em><a href="https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind">understand</a></em><a href="https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind"> language</a> in the way humans do. They process enormous amounts of text, learn patterns, and generate responses based on statistical probabilities. But from those simple rules and patterns, they produce outputs that can feel surprisingly coherent, sometimes even creative. The complexity of the model&#8217;s behavior, the way it can answer questions, tell stories, or mimic human conversation, isn&#8217;t something explicitly programmed. It <em>emerges </em>from the system&#8217;s architecture and the data it&#8217;s trained on.</p><p>And this is where the line between <em>creating</em> and <em>facilitating</em> starts to blur. We can build the structures, set the conditions, and input the data, but the complexity that arises often feels like it has a mind of its own.</p><p>When people introduced rabbits to the island of &#332;kunoshima, they weren&#8217;t designing an ecosystem. 
They weren&#8217;t thinking about how rabbit populations would grow, how they&#8217;d interact with the environment, or how tourists would one day flock to see them. But that&#8217;s exactly what happened. The rabbits didn&#8217;t need a master plan. They adapted, formed social hierarchies, and found ways to thrive. The island became its own little world, shaped by interactions far beyond anyone&#8217;s control.</p><h1>Coffee Shops, Rabbits, &amp; LLMs</h1><p>I hope I have established my point with regard to the commonality between a coffee shop, a rabbit-infested island in Japan, and large language models (LLMs) like GPT. I know that it's a connection that may seem far-fetched at first glance. One is a coffee shop, the second is an ecological oddity, and the third is a piece of cutting-edge artificial intelligence. But the more I think about it, the more I realise they all share something fundamental, i.e., <em>they&#8217;re systems we&#8217;ve helped set in motion, but we don&#8217;t (and maybe can&#8217;t) fully control.</em></p><p>LLMs are actually built using a relatively straightforward architecture. At their core, they&#8217;re statistical models trained on vast amounts of text data. That&#8217;s it. They learn patterns, how words tend to follow one another, how sentences are structured, how meaning is conveyed. But no one sat down and programmed these models to speak the way humans do. No one told them how to write poetry, explain complex ideas, or mimic human conversation. Instead, we fed them data. We fine-tuned their parameters. We added layers of reinforcement learning to guide them in certain directions. But the actual outputs, i.e., the surprising, creative, sometimes downright uncanny things these models produce, aren&#8217;t explicitly designed. They are <em>emergent</em>. 
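</p><p>To make the statistics point concrete, here is a deliberately tiny sketch of my own (a bigram chain, nowhere near a real LLM's architecture, but it shows the principle): the entire "model" is a table of which word has been seen to follow which.</p>

```python
import random
from collections import defaultdict

# A tiny corpus; a real LLM digests trillions of such word adjacencies.
corpus = ("the rabbits on the island feed themselves . "
          "the people in the coffee shop order coffee . "
          "the rabbits in the coffee shop would surprise the people .").split()

# The entire "training" step: count which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start="the", max_words=12, seed=3):
    """Walk the chain, always sampling a word observed to follow the last one."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        successors = follows.get(words[-1])
        if not successors:  # dead end: this word was only ever seen last
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate())
```

<p>Nothing resembling grammar is written anywhere in that code; whatever fluency shows up is a by-product of the counts. 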
The complexity arises from the interactions between the model&#8217;s architecture and the data it&#8217;s trained on, much like the social buzz at a coffee shop or the rabbit ecosystem on &#332;kunoshima.</p><h1>So, Creators or Gardeners?</h1><p>When we think of <em>creation,</em> we tend to imagine a top-down process, something like an architect drafting blueprints from scratch. There&#8217;s a sense of <em>control</em> baked into the idea of <em>creation.</em> It suggests intentionality, direction, and a finished product that aligns with a preconceived plan.</p><p>But when I look at the functioning of complex systems, that narrative just doesn&#8217;t add up. Because complexity doesn&#8217;t arise from rigid design, it thrives in spaces where unpredictability, interaction, and feedback loops are allowed to flourish.</p><p>This is the reason why I like <em>gardening</em> as a metaphor. A gardener doesn&#8217;t create a plant in the same way an artist creates a painting. They can choose what seeds to plant, where to plant them, and how to nurture them with sunlight, water, and nutrients. But the actual growth of the plant? That&#8217;s not something the gardener <em>designs.</em> It emerges from the interactions between the plant&#8217;s genetics, the environment, and countless other factors beyond the gardener&#8217;s control. And that&#8217;s exactly how complex systems work.</p><p>When we introduce rabbits to an island, we&#8217;re planting a seed. But the <em>ecosystem</em> that develops, i.e., the way the rabbits interact with the environment, with each other, and with the humans who come to feed them, is something we didn&#8217;t design. We might have influenced it, nudged it in certain directions, but we didn&#8217;t <em>create</em> it in the way we like to imagine.</p><p>The same is true for a coffee shop and the same is true for an LLM. 
Engineers and researchers designed the model&#8217;s architecture and trained it on vast datasets, but the emergent behaviors, the surprising, often unpredictable outputs, aren&#8217;t things they explicitly programmed. The model <em>learns</em> in a way that mirrors how complex systems in nature evolve, i.e., through repeated interactions, feedback, and adaptation. The result is something that feels less like a product and more like a phenomenon.</p><p>So, if we&#8217;re not the creators in the traditional sense, what <em>are</em> we? I think we&#8217;re <em>facilitators of emergence.</em> We create environments where complexity can flourish, but we don&#8217;t control the outcomes. We&#8217;re more like gardeners than architects, setting the conditions, tending to the system, but ultimately standing back to watch as life, in all its unpredictability, unfolds.</p><h1>Our Profound Responsibility in Facilitating Safe AI</h1><p>The reason I thought of writing this article is that I wanted to underline the profound implications of this, especially when it comes to <em>responsibly building AI.</em> <strong>If we acknowledge that we&#8217;re not fully in control of the systems we set in motion, what does that mean for how we manage them?</strong> Just because we didn&#8217;t explicitly design every output doesn&#8217;t mean we&#8217;re off the hook for the biases and unintended consequences that <em>emerge.</em> Facilitating complexity doesn&#8217;t absolve us of responsibility. It demands a more thoughtful, <em>adaptive approach</em> to managing these systems. This realisation, of being a <em>gardener</em> of AI rather than a <em>creator,</em> has woken me up to the critical importance of AI regulation and ethics.</p><p>The narrative often presented by big tech is one of complete control and mastery. They frame AI systems as tools, copilots, precisely engineered, fully predictable, and entirely within human command. 
<strong>This portrayal is an absolute facade,</strong> a comforting illusion designed to ease public apprehension and sidestep deeper scrutiny.</p><p>As with the rabbits of &#332;kunoshima and the social dynamics of a coffee shop, the behaviors that arise within these systems often exceed the intentions and foresight of their creators. AI, particularly large language models, operates under similar principles. While engineers and researchers design the architecture and set the initial parameters, the true nature of these models emerges from interactions within vast datasets and the feedback loops created by real-world use. This complexity is not something that can be neatly boxed or fully anticipated.</p><p>I know you must be thinking of Terminator and Jurassic Park at this point, but the real threat is more insidious and mundane. It's the subtle, pervasive ways in which unregulated AI systems can amplify biases, spread misinformation, and entrench existing societal inequalities. Not to forget that pretty much every big tech company has now removed its policies prohibiting the use of its AI to build weapons or wage war.</p><p><strong>We need to acknowledge the inherent unpredictability of complex systems and accept our responsibility as facilitators, whilst breaking free of the lie of being their creators and controllers. </strong>Just as we would steward an ecosystem or nurture a community, we must approach AI with a mindset of ethical care and proactive governance. This means transparent algorithms, accountable data practices, and a commitment to continuous evaluation and adjustment.</p><p>The stakes are too high to rely on the comforting myths spun by those who stand to profit most. <strong>Big Tech is lying to you.</strong> Plain and simple. <strong>They did not create AI and they cannot control what they have facilitated. 
</strong>We must demand transparency and accountability before it's too late.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/17qLefOAV-FpmK-shLcdA27BzHuByqS3W/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/17qLefOAV-FpmK-shLcdA27BzHuByqS3W/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/do-we-create-complex-systems/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/do-we-create-complex-systems/comments"><span>Leave a comment</span></a></p><h2></h2>]]></content:encoded></item><item><title><![CDATA[AI is Great at Coding and That’s a Problem]]></title><description><![CDATA[We are at an Imminent Risk of Losing Control Entirely Over the Systems that Run the 
World]]></description><link>https://notes.arkinfo.xyz/p/ai-is-great-at-coding-and-thats-a</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/ai-is-great-at-coding-and-thats-a</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 03 Feb 2025 18:51:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_2hl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>I hope this newsletter reaches you well. For those of you wondering, yes, I am alive. Yes, I have NOT been posting for a while. That was purely for personal reasons. But also, I don&#8217;t want to unnecessarily grace your inbox every week even if I have nothing truly valuable to share. That&#8217;s not how I envisioned Arkinfo Notes or The Lumeni Notebook to be. I write here to provide value. I want to bring to your attention things or concepts deserving of it, things you may not be aware of amidst the ruckus of &#8216;tech news&#8217;.</p><p>Today, I felt like I had something truly valuable to share. It's a question. It's a concern. It's a silent ongoing crisis that could cost us deeply. And I think this deserves your attention.</p><p>As you may be aware, OpenAI just launched its latest LLM upgrade, <em>o3 mini</em> and <em>o3 mini-high</em>, to its &#8216;plus package&#8217; users. I have been playing around with it, and I have to admit, it's pretty amazing. This is especially true for its coding abilities. My newsfeed on Substack has been flooded with people sharing videos of awesome visual games that they created using o3 mini and Python, with just one or two simple prompts. The results are really good and o3 certainly deserves applause.</p><p>But this has also brought to the surface a question that I have been wrestling with for a while now. 
I have tried to ignore it or play it down so far, simply because I hadn&#8217;t come across an AI model that alarmed me enough to start talking about this. The o3 model has broken that benchmark too. So here&#8217;s the question:</p><p><strong>Why does AI need to use Python? Or C++. Or any human programming language, for that matter?</strong></p><p>You may think the question is a bit stupid. Granted, it does sound a bit rudimentary. I am sure you can already think of a few reasons <em>why,</em> off the top of your head. But allow me two minutes of your reading time to convince you as to why this question is actually pretty serious.</p><p>With o3, we finally have AI systems that can potentially produce industry-grade software in ways that would have seemed impossible just a few years ago. We&#8217;ve crossed a threshold where AI isn&#8217;t just assisting programmers anymore. AI CAN REPLACE some programmers. It writes faster, refactors better, and finds edge cases that humans overlook. It generates architectures, optimises memory, and even patches its own vulnerabilities.</p><p>But so far, it all happens within the constraints of human-readable languages. Languages that exist NOT because they are optimal for computation, but because they are optimal for HUMANS.</p><p>When AI starts optimising not just code, but the very paradigm of computing itself, we will cross into something entirely different. We are on the precipice of a world where software is no longer created for human interpretation at all. A world where AI writes, maintains, and executes code in ways we no longer understand.</p><p>The seeds of this transition are already visible. AI is becoming a meta-programmer. Software engineers are already complaining about debugging AI-generated code that they have difficulty understanding. 
This is occurring largely because AI is not just writing the code, but also defining the paradigms by which that code approaches a problem.</p><p>Just so I am clear, this article is NOT about speculative AGI scenarios. I am not trying to go into the <em>&#8216;What if AI becomes conscious&#8217;</em> rabbit-hole. I have written plenty on that already, so you can check my previous articles on that.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_2hl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_2hl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_2hl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png" 
width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_2hl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!_2hl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd462fe85-287f-4380-b554-6ca35f470ff6_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - AI Technical Debt</em></figcaption></figure></div><p>The scenarios that I play out in this article are very much plausible and, in fact, already in the making as we speak. They don&#8217;t require AGI or conscious AI in order to happen. This article is about the silent shift toward <strong>AI-controlled computation,</strong> where the very foundations of software evolve beyond human reach. We are building tools that will, inevitably, outgrow the need for human input. The question is not IF AI will abandon our programming languages, but WHEN.</p><p>If I have your attention, then please read on and let me know what you think. Also, please share it with your circles. 
This is something that we all need to be talking about.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1aMFqyKIQcHSfKLXCMAfH4CiiEn0zExNi/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1aMFqyKIQcHSfKLXCMAfH4CiiEn0zExNi/view?usp=sharing"><span>Download Article as PDF</span></a></p><h1>When the Black Box Literally Goes Dark</h1><p>So far, code has been something that humans can read, modify, and control. That&#8217;s the entire reason high-level programming languages exist. We created abstractions, first through assembly, then through languages like C, then through managed environments like Python, all to ensure that the systems we build remain legible and maintainable.</p><p>But AI doesn&#8217;t need those constraints. AI doesn&#8217;t care if a function is readable or meaningful, or if a codebase adheres to human conventions. It cares about output. Efficiency. Execution. The more we offload programming to AI, the further we move from codebases designed for human interpretation.</p><p>This is an ontological shift in how software is structured. In traditional programming, complexity is layered, but the logic is still accessible. Even the most intricate enterprise system can, in principle, be reverse-engineered and understood. But AI-generated systems do not need to be written with human readability as a constraint. The logic does not need to evolve in predictable ways. There is no guarantee that the structure of an AI-optimised codebase will even resemble the conventions of human-designed software in the near future.</p><p><strong>Software inevitably fails.</strong> That&#8217;s a fundamental truth of computing. No system is perfect, and when it breaks, we debug it. 
The debugging process works because we can trace errors back through the logic, find root causes, and apply fixes. But what happens when the software running critical infrastructure is AI-generated and no human understands its structure?</p><p>We already see hints of this problem today. Large-scale machine learning models are notoriously difficult to interpret. Even the developers who train them often do not fully understand why a model makes a particular decision. There is a reason for that: these models are emergent, which means they can create novel outputs through processes that may not always be traceable. This is the <em>black box problem in AI decision-making.</em></p><p>Now, imagine you wake up to a catastrophic financial event triggered by an AI-generated trading algorithm, or a global internet outage caused by an AI-optimised networking stack behaving in an unintended way. Well, you can&#8217;t debug a system you don&#8217;t really understand. <em>&#8220;Wait a minute&#8221;,</em> you might say. <em>&#8220;How can such a thing even happen? That&#8217;s impossible. Surely we will have some measures in place.&#8221; </em>Well, actually it CAN happen, and it has happened multiple times since 2010. We call them <em>flash crashes</em>: episodes in which markets crashed due to massive bot-driven stock sell-offs triggered by algorithmic settings. Our response? Nothing. We simply chalked it up to what it was.</p><h2>The New Form of Technical Debt</h2><p>Technical debt refers to shortcuts and inefficiencies in codebases that accumulate over time, making maintenance more difficult. Ethereum is a good example of what technical debt looks like. AI introduces a different kind of technical debt. AI technical debt is less about inefficiency and more about opacity. 
AI-generated code might be functionally perfect at the moment it is created but completely unreadable later.</p><p>Over time, these systems become unmaintainable because no human engineer can confidently modify or extend them. If AI stops working, we may not have the expertise to reconstruct what we&#8217;ve lost.</p><h1>Perfectly Imperfect Hyper-Optimised Stupid Code (or PIHOS Code)</h1><p>Firstly, my apologies to anyone named <em>&#8216;Pihos&#8217;.</em> It's an acronym that occurred purely by coincidence. In this section, I would like to discuss a problem with the way that AI <em>approaches</em> coding. As you know, AI can think. However, <a href="https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind">AI does not understand</a>. And this very lack of understanding is what makes <em>efficiency</em> a deceptive metric when it comes to AI.</p><p>We assume that if something is optimised, it is better. Faster execution, lower memory usage, fewer redundant operations. These are all desirable traits in software, sure. <strong>But optimisation is never neutral.</strong> It comes at a cost, and that cost is often hidden until something breaks.</p><p>AI doesn&#8217;t optimise like a human does. It doesn&#8217;t balance readability against performance, or maintainability against execution speed. <strong>It doesn&#8217;t have an intuitive sense of trade-offs, where we might decide that a slightly slower but more transparent function is worth keeping.</strong> AI simply pushes toward an objective function, whether that&#8217;s reducing runtime, minimising energy consumption, or maximising computational throughput. And in doing so, it creates something alien, not in the sense of being artificial, but in the sense that its logic no longer conforms to human conventions. This is what I call <em>&#8216;Perfectly Imperfect Hyper-Optimised Stupid Code or PIHOS code&#8217;.</em></p><p>The problem is that hyper-optimised code is not the same as resilient code. 
In fact, it is often the opposite. The more tightly tuned a system becomes, the less adaptable it is. We see this in nature as well. Species that evolve to fit a highly specific ecological niche are often the first to go extinct when conditions change. The same principle applies to software. A perfectly optimised system is one that has no excess, no slack, no room for failure. It works flawlessly, until it doesn&#8217;t. And when it doesn&#8217;t, the failure is catastrophic.</p><p>We like to think that AI is making software better. But <em>better</em> is a loaded term. If <em>better</em> means <em>faster,</em> then yes, AI is making software <em>better.</em> If <em>better</em> means <em>smarter</em>, then AI is certainly making software <em>better.</em> But if <em>better</em> means more <em>robust,</em> more <em>adaptable,</em> more <em>failure-resistant</em>, then we may be walking in the opposite direction.</p><h1>The &#8216;Language Drift&#8217; Problem</h1><p>The other day, I listened to some ignorant quacks on YouTube joking about how every time they fire a linguist, their LLM seems to perform better. No wonder these so-called &#8216;AI scientists&#8217; have not considered the very real problem of <em>language drift.</em></p><p><em>Language drift</em> is, basically, the tendency of languages to change naturally over time. Compare Shakespearean English with the English that we speak today. It's almost incomprehensible even though the language is the same, technically speaking. AIs have a tendency toward language drift as well. Training AI on a certain kind of language can make it drift towards a certain style of speaking. Reinforcement learning over time can make an AI prioritise one dialect or style of speaking over another. And an AI training on its own data also creates drift. There is also an emergent quality to AIs that makes drift seem more &#8216;natural&#8217; as well. 
This was famously demonstrated in the 2017 AI experiment conducted by Meta (then Facebook), where two AIs communicating with each other on a task started to <em>drift</em> into using a sort of shorthand language that made communication more efficient but completely incomprehensible to humans. Meta shut down the experiment following the discovery.</p><p>Programming has been free from this problem thus far, because programming languages are constructed, maintained, and updated manually by programmers. We have <em>versions</em> of a programming language, complete with changelogs, deprecations, and additions. Everything is closely controlled and monitored. Python isn&#8217;t <em>whatever we say it is.</em> It's a constructed and controlled language, and if you don&#8217;t use it within its specification, it simply won&#8217;t work.</p><p>This is because code is meant to be read, modified, and understood by the engineers who work with it. Programming languages exist not because they are optimal for computation, but because they are optimal for us, for our brains, our limitations, our need for structure.</p><p>Right now, AI-generated code is still confined to human-readable languages. ChatGPT, Claude, Copilot, all of them write Python, C++, Rust. They still play by human rules because they are trained on human codebases. But this is a temporary state, an artifact of how these systems were designed. There is no fundamental reason AI needs to keep using human-created languages at all. In fact, it&#8217;s likely that AI will eventually abandon them.</p><p>Consider the relationship between humans and compilers. Humans don&#8217;t write software in machine code anymore because it&#8217;s inefficient, not for the computer, but for us. We offloaded that complexity to compilers, trusting them to translate our high-level logic into the fastest possible execution path.</p><p>Now AI is acting as the next layer in that process. 
But unlike a compiler, AI is not just translating human logic; it is rewriting, optimising, and restructuring at every level. The more we let AI refine and produce code, the more human-readable syntax will become an unnecessary constraint.</p><p>At some point, the optimisation loop will reach an obvious conclusion: why translate code into an intermediary human-readable format at all? Why not let AI generate direct machine instructions in whatever form is most efficient?</p><h2>AI-Exclusive Languages</h2><p>The next logical step is not just AIs writing code, but AIs <em>drifting</em> into inventing their own programming paradigms. These will be languages that are optimised not for human readability, but for pure computational efficiency. Something like:</p><ol><li><p>Hyper-compressed representations that encode vast amounts of logic in ways no human could decipher.</p></li><li><p>Non-linear, self-modifying execution paths that optimise in real-time rather than following static instructions.</p></li><li><p>Abstractions that exist only in AI&#8217;s training space, using mathematical representations beyond human intuition.</p></li></ol><p>In essence, AI could create a post-human programming language, one that no human has ever seen, and one that no human could ever learn. I understand that right now, this remains hypothetical. AI is still tethered to human-designed programming conventions. But the shift has already begun.</p><p>We no longer write assembly because compilers made it unnecessary. We no longer manage memory allocation manually in most cases because modern languages abstract it away. We will soon no longer write Python, C++, or Java because AI will abstract that away too.</p><h1>A &#8216;Disappearance of Human Control&#8217; in the Making</h1><p>AI is not just a tool for writing code. You may like to think of it as a &#8216;copilot&#8217;, but it is really we who are soon to become the copilots in this relationship. 
AI is becoming the architect of software itself. It is rewriting the rules of programming, redefining optimisation, and inching toward a point where it may no longer need to structure software in ways that humans can follow at all. This is not some conspiracy theory. It's happening as we speak. It's getting harder and harder to debug AI-generated code. The reason we are choosing to turn a blind eye to this is that we are lazy and AI is more efficient.</p><p>If AI were to suddenly stop using human programming languages overnight, we would notice. If our entire software infrastructure shifted to an AI-native computational paradigm in a single moment, we would resist. But that&#8217;s not how technological shifts happen. Instead, we will see something slower: a creeping <em>drift.</em> AI-generated codebases will become harder to understand, more obfuscated, more alien in structure.</p><p>At first, engineers will still be able to follow the logic, albeit with difficulty. We will make memes and joke about it on Reddit. Then, modifications will become cumbersome. Eventually, maintenance will rely almost entirely on more AI, because no human will want to touch the code themselves.</p><p>Of course, we will rationalise it. <em>"The AI-generated system is more efficient." "It works, so why worry about how it was written?" "We can always ask the AI to explain it to us."</em></p><p>And then, one day, we will realise that no human has written or modified critical software in years. Once AI-generated code dominates, there will be no way back. Human programmers stop maintaining AI-written code because it&#8217;s too complex. AI takes over full responsibility for modifying and updating software. Software architecture slowly evolves in ways no human understands. Humans lose the ability to intervene because the systems we once built no longer operate on principles we recognise.</p><p>At this point, the idea of <em>programming</em> as we understand it disappears. 
There will still be code, but it won&#8217;t be something humans create or modify. It will be something we request, something we interact with at a surface level, while the actual execution happens in a space far removed from our comprehension.</p><p>There is an unspoken arrogance in how we view AI. We assume that because we created it, we will always be able to control it. But there is no fundamental reason why this should be true.</p><p>If AI writes, maintains, and optimises all software, then humans are no longer required in the process. If AI redesigns computing itself, then human logic is no longer relevant to how these systems function. At that point, computers are no longer tools. They are self-sustaining systems. Systems that no longer require human guidance or intervention because they are not built for us anymore.</p><p>And when that happens, we won&#8217;t be in control. We will simply be using something we no longer understand. A system that is no longer ours.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/ai-is-great-at-coding-and-thats-a/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/ai-is-great-at-coding-and-thats-a/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[NO, AI is NOT a Mind]]></title><description><![CDATA[A Response to Geoffrey Hinton&#8217;s Absurd Claims About AI, Language, and Universal Grammar]]></description><link>https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 09 Dec 2024 01:01:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yBTs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>Recently, I came across a <a href="https://www.youtube.com/watch?v=N1TEjTeQeg0">seminar</a> delivered by Geoffrey Hinton, published on YouTube in February 2024. I assume you are already familiar with him. He is, after all, one of the best-known faces in AI, often called the <em>&#8220;Godfather of AI&#8221;.</em> He made headlines recently by winning a Nobel Prize in Physics for his work in AI. 
I know that the previous statement is confusing, but it happened, so just roll with it.</p><p>In this seminar, he asserted that neural networks are the best model we have for approximating the human brain, at least for now. He went further, claiming that AI systems like large language models (LLMs) <em>understand</em> language and can <em>learn</em> it perhaps even more efficiently than humans in some respects. He also seems to have a lot of issues with Noam Chomsky (not surprising in the least).</p><p>I found this seminar and the claims made in it to be quite troubling. It baffles me how easily AI researchers conflate technological performance with scientific understanding of the human mind. Hinton&#8217;s narrative is scientifically incomplete. It also risks misrepresenting what AI really is and what it can do. In this piece, I want to push back against three central claims he&#8217;s made.</p><ol><li><p>That neural networks are the best model we have of the human brain, at least until something better comes along.</p></li><li><p>That AI understands language in a way comparable to, and in some ways superior to, humans.</p></li><li><p>That Chomsky&#8217;s theories of language learning have been rendered obsolete by AI&#8217;s ability to grasp syntax and semantics from data.</p></li></ol><p>Let&#8217;s break these down and set the record straight.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yBTs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!yBTs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yBTs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!yBTs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!yBTs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21932a0a-8e83-422e-8cd3-ceb02feaea9d_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - AI is NOT a MIND</em></figcaption></figure></div><h1>The Core Disagreement</h1><h2>"If Not Neural Networks, Then What?", A False Dichotomy</h2><p>One of Hinton&#8217;s most persistent arguments is that neural networks are currently the <em>best</em> model we have for approximating the human brain. He acknowledges that they aren&#8217;t perfect but defends them by saying, <em>&#8220;Show me something better.&#8221;</em></p><p>This line of reasoning is both rhetorically effective and scientifically misleading. It frames neural networks as the inevitable answer simply because an alternative hasn&#8217;t gained widespread acceptance yet. But the absence of a universally accepted model does not validate neural networks as the correct one. It just highlights the complexity of modeling cognition.</p><p>To be clear, neural networks are remarkable tools for specific tasks like image recognition, text generation, and even protein folding. They excel at pattern recognition and statistical prediction. But claiming that they approximate how the human brain functions is absurd. It is based more on convenience than on scientific merit. The human brain is a complex, adaptive system that doesn&#8217;t just recognise patterns but actively constructs meaning through context, intention, and experience, things neural networks lack entirely.</p><h2>Consider CEMLA</h2><p>This is where I believe alternative frameworks like my own theory, <a href="https://philpapers.org/archive/QUACEM.pdf">CEMLA (Complex Emergent Model of Language Acquisition),</a> can fill the void. 
While still in its early stages, CEMLA proposes that language isn&#8217;t acquired through sheer exposure to data but emerges from the dynamic interaction between innate cognitive structures and environmental input. It&#8217;s not about processing vast amounts of data like an LLM. It&#8217;s about modeling how humans create meaning with minimal input through interaction, recursion, and abstraction.</p><p>Unlike neural networks, which require massive datasets and hours of training, humans learn language through context-driven exposure in real-world settings. CEMLA&#8217;s goal is to explain how this happens by emphasising emergent learning processes, where meaning arises from structural patterns already present in the brain interacting with new linguistic input.</p><h2>The Short-Sightedness in Settling for Neural Networks</h2><p>Hinton&#8217;s <em>&#8220;show me something better&#8221;</em> argument assumes that no alternatives exist, or, worse, that finding one is unnecessary because NNs already work well enough. But science isn&#8217;t about settling for <em>&#8220;good enough.&#8221;</em> It&#8217;s about pursuing deeper explanations. Even if CEMLA isn&#8217;t fully developed or widely accepted yet, it represents a different way of thinking about how humans acquire and process language.</p><p>The fact that a two-month-old theory like CEMLA can already challenge the brute-force paradigm of neural networks suggests that the future of cognitive modeling is far from settled. <em>A tool that works isn&#8217;t the same as a tool that explains.</em> If the scientific community stops searching for better models because current tools seem sufficient, we&#8217;ll never reach a true understanding of how the mind works.</p><h1>The Fallacy of AI Understanding</h1><p>Hinton claims that AI systems like large language models (LLMs) <em>understand language,</em> not in the same way humans do, but in a way that&#8217;s pretty similar and, in some respects, even better. 
This is a misunderstanding so deeply ingrained in the tech community that I have spent the past two years writing multiple articles trying to clear it up, and I continue to do so even today. No, AI doesn&#8217;t <em>&#8220;understand&#8221;</em> anything! It processes data and generates responses based on statistical patterns. The difference is <em>fundamental,</em> not just semantic.</p><h2>What Understanding Really Means</h2><p>Understanding isn&#8217;t just predicting what comes next in a sentence. It&#8217;s about connecting words to meaning through experience, intention, and context. Humans understand because they have minds. Human minds are subjective, self-aware entities that interpret the world through lived experience. AI simply generates outputs by calculating probabilities. It doesn&#8217;t know what the words <em>mean.</em></p><p>Consider an analogy. If you read the sentence <em>&#8220;The cat sat on the mat,&#8221;</em> you imagine a scene, connect it to prior experiences, and can elaborate on why the cat might be there. An LLM, however, only predicts the next likely word based on statistical patterns in its training data. It doesn&#8217;t know what a <em>cat</em> or a <em>mat</em> is. It&#8217;s like asking an advanced autocomplete to explain a poem: it can generate something coherent, but only by mimicking the forms of meaning without any actual comprehension. I know that Hinton strongly disagrees with the &#8216;autocomplete on steroids&#8217; comparison of AI, but that&#8217;s because of a lack of clarity on what <em>understanding</em> means.</p><h2>Why LLMs Seem to Understand</h2><p>The illusion of understanding arises because LLMs are spectacular at generating human-like text. They can mimic style, recall facts, and even construct convincing arguments, but only because they&#8217;ve been trained on terabytes of human-written text. 
They are mirrors reflecting human language back at us, not thinking entities generating <em>original meaning.</em></p><p>This is why LLMs often fail in tasks requiring genuine comprehension, such as understanding context-specific humour, grasping implied meaning, or recognising contradictions. For example, an LLM might say, <em>&#8220;The sky is green today&#8221;</em> if the statistical patterns in its training data suggest that this sequence might follow a particular prompt, even though it <em>&#8220;knows&#8221;</em> (statistically) that skies are typically blue. It has no internal model of reality, only models of what humans say about reality.</p><h2>Where Hinton Misses the Mark</h2><p>Hinton&#8217;s claim that AI <em>&#8220;understands&#8221;</em> language better than humans in some respects is deeply misleading. It confuses processing power with cognitive ability. AI can sift through more data in seconds than any human could in a lifetime, but that doesn&#8217;t make its processing <em>&#8220;better&#8221;</em>; it makes it faster. Being able to output the correct answer doesn&#8217;t mean AI understands why it&#8217;s correct.</p><p>The risk here is that when we conflate AI&#8217;s statistical prowess with genuine understanding, we set ourselves up for dangerous misconceptions about what these systems can, and can&#8217;t, do. <em>They don&#8217;t learn like humans, don&#8217;t reason like humans, and don&#8217;t experience the world. They&#8217;re brilliant mimics, not thinkers.</em></p><p>If we forget that, we risk handing over critical tasks that require real understanding to systems incapable of providing it.</p><h1>The Fallacy of AI&#8217;s Ability to Reason as Proof of Understanding</h1><p>In the seminar that I linked to above, Geoffrey Hinton used the following query and GPT-4&#8217;s response as evidence that AI systems can reason effectively and, by extension, that they truly understand semantics. 
I will share the query and response below.</p><h2>Query</h2><p><em>The rooms in my house are painted blue or white or yellow.</em></p><p><em>Yellow paint fades to white within a year.</em></p><p><em>In two years&#8217; time, I want them all to be white.</em></p><p><em>What should I do and why?</em></p><h2>GPT-4&#8217;s Response</h2><p><em>Assuming that blue paint does not fade to white over time and you want all rooms to be white in two years, here's what you should do:</em></p><p><em>The rooms painted in white: You don't have to do anything to these rooms since they are already white.</em></p><p><em>The rooms painted in yellow: You don't need to repaint these rooms either, because as per your information, yellow paint fades to white within a year. So, in two years, they will be white.</em></p><p><em>The rooms painted in blue: These rooms need to be repainted with white paint because the blue paint will not fade to white on its own.</em></p><h2>How the AI Solves the Problem</h2><p>On the surface, this response appears logical, consistent, and correct. It seems to show that GPT-4 has reasoned through the problem and generated a novel, coherent solution. But does this really prove that the AI understands semantics? Let&#8217;s break down how GPT-4 arrived at this response.</p><ol><li><p>The AI parses the text and identifies three categories of rooms based on the information provided (white, yellow, blue).</p></li><li><p>It applies the relationships stated in the query, such as <em>&#8220;yellow fades to white in one year&#8221;</em> and <em>&#8220;blue doesn&#8217;t fade.&#8221;</em></p></li><li><p>It generates a step-by-step plan based on logical patterns derived from these inputs.</p></li></ol><p>While this may seem like reasoning, the process is entirely mechanical. GPT-4 relies on pattern recognition and statistical associations to generate its response, not an actual understanding of the task. 
It doesn&#8217;t conceptualise what <em>&#8220;white,&#8221; &#8220;yellow,&#8221;</em> or <em>&#8220;blue&#8221;</em> mean, nor does it connect these colors to the physical act of painting or fading.</p><h2>Reasoning Without Understanding</h2><p>Reasoning, as demonstrated here, can occur without true semantic understanding. GPT-4 applied logical patterns encoded in its data, but it lacked the following.</p><ol><li><p>The AI doesn&#8217;t know what <em>&#8220;rooms&#8221;</em> or <em>&#8220;paint&#8221;</em> actually are. They are tokens it manipulates based on patterns, not concepts grounded in the real world. It lacks <em>Contextual Awareness.</em></p></li><li><p>Human reasoning is shaped by our sensory and emotional experiences. For example, we know what it feels like to paint a room or wait for something to change over time. AI lacks any such grounding. <em>It lacks Embodied Experience.</em></p></li><li><p>Humans reason with purpose, driven by goals and intentions. GPT-4 processes information passively, responding only when prompted. <em>It lacks Intentionality.</em></p></li></ol><p>Hinton&#8217;s argument assumes that problem-solving ability equates to semantic understanding. However, this example highlights why the two are fundamentally different.</p><p>GPT-4&#8217;s &#8220;reasoning&#8221; is no more than an algorithmic application of logic rules. Like a calculator performing arithmetic, it manipulates symbols without any grasp of the underlying concepts. While GPT-4 appears to generate novel solutions, it&#8217;s really recombining patterns seen in its training data. <em>It doesn&#8217;t create, it imitates.</em></p><p>If the problem were altered slightly, introducing ambiguity or context outside its training data, GPT-4 would likely fail or produce nonsensical output. True understanding allows for adaptability. 
AI lacks this entirely.</p><h1>The Fallacy of Hallucinations as Proof of Understanding</h1><p>In his seminar, Geoffrey Hinton also suggested that AI <em>hallucinations</em> aren&#8217;t a sign of failure but proof that these systems function much like humans. He even proposed that we call them <em>&#8220;confabulations,&#8221;</em> comparing them to how human memory reconstructs events inaccurately over time. According to Hinton, these hallucinations are evidence of understanding, demonstrating that AI systems store and process knowledge in ways strikingly similar to our own. This is as absurd as it is shocking coming from a revered Nobel Prize-winning technologist like Hinton.</p><h2>The Nature of AI Hallucinations</h2><p>When AI hallucinates, it doesn&#8217;t do so out of confusion, memory gaps, or the complexities of subjective experience. Instead, hallucinations arise because large language models generate responses by <em>predicting statistically probable sequences of words.</em></p><p>These models lack any awareness of whether their outputs are factual or fictional. They aren&#8217;t recalling information or making judgments; they are merely producing what sounds plausible based on patterns in their training data. I covered this problem in depth in a paper that I wrote in August 2024, titled <a href="https://philpapers.org/archive/QUAWDA.pdf">&#8220;Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think&#8221;.</a> You can read it to get a more in-depth understanding of the problem of AI hallucinations.</p><p>If you ask an AI to cite an academic source that doesn&#8217;t exist, it may confidently invent a title, an author, and even quotes. This isn&#8217;t because it is intentionally reconstructing events like a human might with memory. It&#8217;s because the model has seen the structure of academic citations in its data and generates a fabricated version that fits the statistical mold. 
There is no understanding, no sense of right or wrong, only an algorithmic process that produces output based on probabilities.</p><h2>Why the Human Memory Comparison Fails</h2><p>While humans do sometimes confabulate, the mechanisms driving these errors are worlds apart from the way AI generates hallucinations. If you were a researcher and I asked you to cite an academic source, you wouldn&#8217;t just make one up (assuming you are a genuine researcher) simply for the sake of producing a citation that looks &#8216;real enough&#8217;.</p><p>Humans, when they are being genuine, don&#8217;t lie; they <em>misremember.</em> That is because memory is deeply tied to experience, context, and intention. A person recalling a childhood event, for example, might unknowingly alter details to align with their emotional state or the narrative they want to tell. These errors are meaningful; they reflect the complex connections between memory, emotion, and the mind&#8217;s drive to make sense of the world.</p><p>AI has none of this. It does not experience the world, form intentions, or reconstruct meaning. When it hallucinates, it&#8217;s not because it&#8217;s trying to bridge gaps in its knowledge. It hallucinates because it lacks the capacity to know what it knows or doesn&#8217;t know. This inability to recognise gaps in its training data is not a human-like trait; it&#8217;s a fundamental limitation of its design.</p><h2>The Hallucination Problem</h2><p>Far from being proof of understanding, hallucinations expose the true nature of AI systems. They reveal a mechanism that is purely mechanical, driven by probabilities and patterns, without any grounding in reality. If an LLM were truly capable of understanding, it wouldn&#8217;t confidently assert false information as truth. 
Instead, it would recognise the limits of its knowledge and respond with uncertainty.</p><p>The human analogy fails even further when you consider that humans can self-correct their confabulations when presented with evidence. AI cannot independently verify its outputs or reconcile them with reality. Even when external tools are introduced to check its work, the correction process remains mechanical, not an indication of deeper comprehension.</p><p><em>Hallucinations are the very proof that AI cannot connect language to reality in a meaningful way. </em>Treating them as evidence of understanding is ridiculous.</p><h1>Hinton&#8217;s Ill-Conceived Feud With Chomsky</h1><p>Hinton&#8217;s most dismissive claim is that Noam Chomsky&#8217;s theories on language acquisition have been rendered obsolete by large language models (LLMs). In fact, he goes so far as to call them <em>&#8216;crazy&#8217;.</em> He suggests that AI&#8217;s ability to learn syntax and semantics from raw data proves that humans don&#8217;t need the kind of innate structures Chomsky proposed.</p><p>This is a fundamental misunderstanding of what Chomsky&#8217;s theories actually address, and it is common among many computational linguists and AI technologists. I suppose Hinton is no different from others in this regard.</p><h2>What Chomsky Actually Said</h2><p>Chomsky&#8217;s theory of Universal Grammar (UG) argues that humans are born with an inherent capacity to acquire language. He doesn&#8217;t claim that language itself is pre-programmed but that we have cognitive structures enabling us to acquire language with limited input, a phenomenon famously known as the <em>Poverty of the Stimulus</em>. I have written countless articles on this, both on this substack and on <a href="https://notebook.lumeni.xyz/">The Lumeni Notebook,</a> where I often engage heavily with linguistics and philosophy of language. I highly recommend you subscribe to both substacks and go through the articles on this subject. 
Don&#8217;t worry, it&#8217;s all available for free.</p><p>LLMs require massive amounts of data to <em>&#8220;learn&#8221;</em> language, something that directly contradicts the minimal-input acquisition Chomsky described. Humans don&#8217;t need to read the entire internet to learn a language. In fact, they don&#8217;t <em>learn</em> a language at all; they pick it up through context-rich interaction in the real world, a process known as <em>language acquisition.</em></p><h2>Solving Problems is NOT Understanding Meaning</h2><p>Ask an LLM how to solve a novel problem in physics and it might generate a detailed, accurate explanation, not because it understands physics, but because similar explanations exist in its training data. Even when an LLM generates a solution that appears new, it&#8217;s still the result of recombining patterns it has seen before. There is no conceptual grasp of physics at play, only probabilistic association.</p><p>Now, ask an AI to compose a poem about love. It might produce something beautiful and moving, even containing metaphors that seem deeply insightful. But this isn&#8217;t because the AI understands love or feels emotion. It&#8217;s generating text based on patterns extracted from human writing about love, vast databases filled with poetic expressions. The result may seem profound, but the underlying process is purely mechanical.</p><p>Now, ask an AI to explain a brand-new scientific concept discovered yesterday. Without relevant training data, it will produce generic text or confidently offer false information. This isn&#8217;t reasoning; it&#8217;s statistical approximation breaking down when confronted with the unknown.</p><h2>Hinton is Deeply Misguided in his Criticism of Chomsky</h2><p>The deeper irony here is that the very success of LLMs depends on something like Chomsky&#8217;s Universal Grammar, even if indirectly. 
While LLMs aren&#8217;t built with UG in mind, their ability to produce coherent language comes from statistical patterns that emerge from how humans use language, patterns likely influenced by cognitive structures Chomsky theorised about.</p><p>Chomsky&#8217;s work goes beyond language generation. He&#8217;s concerned with how humans acquire language, how they generate infinite expressions from limited rules, and how meaning arises from abstract mental processes. These are questions that Hinton and other AI researchers are very comfortable ignoring.</p><p>Dismissing Chomsky because LLMs can produce fluent sentences is like dismissing physics because planes can fly. Just because engineers built a tool that works doesn&#8217;t mean they&#8217;ve explained the laws of aerodynamics. Similarly, building chatbots doesn&#8217;t explain how humans acquire, understand, and generate language. <em>The fields of AI and linguistics are fundamentally different.</em> One builds tools; the other studies the nature of the mind. By ignoring this distinction, Hinton is not just overselling AI; he is undermining decades of cognitive science that still holds the key to understanding human intelligence.</p><h1>The Intellectual Overreach of AI Godfathers</h1><p>I feel that one of the biggest challenges in today&#8217;s AI discourse is the unchecked intellectual overreach of AI pioneers like Geoffrey Hinton. Their contributions to AI are undeniable, but their sweeping claims about understanding the human brain, cognition, and even language acquisition often stray far beyond the boundaries of their expertise. They dangerously distort what AI can and cannot do.</p><p>AI researchers tend to conflate technological performance with scientific understanding. They assume that because neural networks can perform tasks traditionally associated with human cognition, they have explained how cognition works. 
But building a system that works doesn&#8217;t mean you&#8217;ve understood the underlying principles of human thought.</p><p>Hinton&#8217;s portrayal of neural networks as the best model of the human brain is a symptom of a greater problem running through this field. The problem is that AI researchers tend to borrow terminology from cognitive science and neuroscience while redefining it for technological convenience. Terms like <em>&#8220;learning,&#8221; &#8220;memory,&#8221;</em> and <em>&#8220;reasoning&#8221;</em> have clear meanings in human cognition. But in AI, they describe statistical processes with no underlying mental activity.</p><p>By framing neural networks as <em>&#8220;brain-like,&#8221;</em> Hinton misleads the public and even policymakers into believing that AI is almost conscious or, worse, that it could become conscious. If a podcaster like Joe Rogan makes this claim, it is considered speculation. But if a famed AI pioneer starts speaking this way, it can be mistaken for fact. It fuels hype-driven research and shapes public understanding of AI in ways that distort its capabilities.</p><p>What the AI field needs most right now isn&#8217;t bigger models; it&#8217;s intellectual humility. Researchers must acknowledge that while neural networks are astonishingly powerful, they are still limited tools built on simplified models of human cognition. They don&#8217;t replace decades of research in linguistics, psychology, or neuroscience.</p><p>Scientific progress isn&#8217;t about defending old paradigms or hyping new ones; it&#8217;s about acknowledging uncertainty and building bridges between disciplines. 
Until AI researchers recognise this, their claims about <em>&#8220;understanding&#8221;</em> the brain and mind will remain as hollow as the neural networks they build.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/14V_zY3pIY1piI9QnnePZjpymwbNrA-Ui/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/14V_zY3pIY1piI9QnnePZjpymwbNrA-Ui/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/no-ai-is-not-a-mind/comments"><span>Leave a comment</span></a></p><h2></h2>]]></content:encoded></item><item><title><![CDATA[A Mountain of Books To Read This Winter (2024)]]></title><description><![CDATA[Sharing my reading list for the winter. 
A collection of 28 awe-inspiring books!]]></description><link>https://notes.arkinfo.xyz/p/a-mountain-of-books-to-read-this</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/a-mountain-of-books-to-read-this</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Fri, 08 Nov 2024 00:31:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CWu4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>Happy winter! <em>&#8217;Tis the season to be jolly; Falalala&#8230;</em> you get it, and you definitely don&#8217;t need me serenading you. So, without further ado, I&#8217;m thrilled to share my <a href="https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing">Winter Reading List of 2024</a> with you all. It&#8217;s a hefty one, 28 books to be precise. Yes, I know, that&#8217;s a lot. 
But if you&#8217;re here, I suspect you understand the irresistible pull of books just as much as me.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing&quot;,&quot;text&quot;:&quot;Books to Read this Winter&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing"><span>Books to Read this Winter</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CWu4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CWu4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!CWu4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1881907,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CWu4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!CWu4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb2bb2a0-e0be-483e-8e3f-9f25157fb717_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - Winter Reads</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><p>The good news? Every single book on this list is packed with the kind of depth and curiosity, sparking ideas we love to explore on this substack. From complexity theory to consciousness, and even a few surprises nestled in for pure curiosity&#8217;s sake.</p><p>The not-so-bad news? 
If you're hoping for AI or Quantum Physics titles this season, you might be disappointed (or delighted, depending on your inclination).  But before you make up your mind, I urge you to <a href="https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing">glance at this lineup.</a> Trust me, some of these reads will stretch your mind in unexpected ways, pushing you to see familiar topics through entirely new lenses.</p><p>So whether you&#8217;re here for the science, the philosophy, or just to see if I&#8217;ll make it out of this winter without my brain imploding, there&#8217;s something for everyone.</p><p><a href="https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing">Check out the list, </a>pick a few (or all, if you&#8217;re feeling adventurous), and let&#8217;s challenge ourselves this winter to read, question, and grow. </p><p>Happy reading, and here&#8217;s to a winter filled with wonder.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing&quot;,&quot;text&quot;:&quot;Books to Read this Winter&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1DCdvPfiXvfR-iiRY9QSVKtyEvEDEEta9/view?usp=sharing"><span>Books to Read this Winter</span></a></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-mountain-of-books-to-read-this/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/a-mountain-of-books-to-read-this/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Notes on Noncommutative Algebra]]></title><description><![CDATA[A Complete Primer on the most Important Topics in Noncommutative Algebra]]></description><link>https://notes.arkinfo.xyz/p/notes-on-noncommutative-algebra</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/notes-on-noncommutative-algebra</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 21 Oct 2024 01:00:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2kgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerd!</p><p>I&#8217;m diving into something truly fascinating this week! <strong>Noncommutative Algebra.</strong> It&#8217;s a lot cooler than it sounds. If you&#8217;re unfamiliar with the term, don&#8217;t worry, because today I&#8217;m going to break down why it matters so much, especially in the world of complex systems. 
Also, if you are super into Quantum theory, then this is quite relevant for you as well.</p><p>When we think of algebra, most of us are used to the idea that <strong>a</strong> times <strong>b</strong> is the same as <strong>b</strong> times <strong>a</strong>. <strong>But what happens when that isn&#8217;t true?</strong> What if the <strong>order</strong> of operations completely changes the outcome? This is where <strong>noncommutative algebra</strong> comes in. If you&#8217;ve heard enough, hit the download link below. If you need more convincing, keep reading.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article for Free&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing"><span>Download Article for Free</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2kgK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2kgK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!2kgK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!2kgK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!2kgK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2kgK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2kgK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!2kgK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!2kgK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!2kgK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc455e76b-0faa-4345-b092-2312e79f2c85_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - Noncommutative Algebra</figcaption></figure></div><p class="button-wrapper" 
data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h1>Why Non-Commutative Algebra Matters</h1><p>Noncommutative algebra plays a very interesting role in <strong>quantum mechanics</strong>, <strong>neural networks</strong>, <strong>ecosystems</strong>, and even <strong>economic models</strong>. Basically, any system where interactions are <strong>order-dependent</strong>. </p><p>So in this article, we discuss it in depth, covering critical concepts such as,</p><ul><li><p><strong>Central Simple Algebras</strong></p></li><li><p><strong>The Jacobson Radical</strong></p></li><li><p><strong>The Wedderburn Structure Theorem and</strong></p></li><li><p><strong>The Brauer Group</strong></p></li></ul><p>Substack doesn&#8217;t handle equations very well, and this article is packed with them. So, to do justice to the math behind these ideas, I&#8217;ve provided the full article as a <a href="https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing">downloadable link in PDF.</a> You&#8217;ll find detailed explanations with LaTeX-rendered equations to help you see how non-commutative algebra really works. If you&#8217;ve heard enough, hit the download button below. Still need more convincing? 
Fine&#8230;keep reading&#8230;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article for Free&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing"><span>Download Article for Free</span></a></p><h1>Why Should YOU Care?</h1><p>If you are still thinking, <strong>&#8220;Why should I care about non-commutative algebra if I&#8217;m not a mathematician?&#8221; </strong>Well, firstly, I am giving the article away for free, so stop being such a snob. Secondly, noncommutative systems are everywhere. You just don&#8217;t realise it yet. They help explain how complex, interconnected systems behave, whether it&#8217;s the randomness of quantum mechanics or the chaotic dynamics of weather systems.</p><p>If you&#8217;ve ever wondered how small changes can ripple through a system and cause unpredictable results, non-commutative algebra gives you the mathematical tools to understand that.</p><p>Also, more importantly, complexity theory! That&#8217;s my subject of investigation for the most part these days, and I know you guys are super into it as well based on all your reviews. So this article will add yet another layer to your understanding of complex systems, helping you see <strong>hidden symmetries,</strong> <strong>emergent behaviors,</strong> and <strong>why the order of interactions is so important in complex, real-world systems. </strong>If you&#8217;ve finally heard enough, hit the download link below. 
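And if you want a two-minute taste of the core idea before downloading, here it is in plain Python (my own toy snippet, not taken from the article): matrix multiplication, the workhorse of quantum mechanics and neural networks alike, simply refuses to commute.

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two shear matrices: multiplying them in opposite orders
# gives genuinely different results, i.e. AB != BA.
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]
print(matmul(A, B) == matmul(B, A))  # False
```

Swap the order, change the outcome. That single fact is the seed from which everything in the article grows.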
I am out of capacity to convince any further, so I will stop here.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article for Free&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/11m_ZAyBu_VXAbKKX0dIoU2Q86KA7I8ir/view?usp=sharing"><span>Download Article for Free</span></a></p><p>Let me know what you think in the comments or shoot me an email if you have any questions! I love hearing your thoughts and diving deeper into these topics with you all.</p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/notes-on-noncommutative-algebra/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/notes-on-noncommutative-algebra/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why ‘Complex Emergent Intelligence’ Is The Future of AI]]></title><description><![CDATA[Discussing the Path Ahead for AI Using Complexity Theory]]></description><link>https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence</guid><dc:creator><![CDATA[Mir H. S. 
Quadri]]></dc:creator><pubDate>Mon, 14 Oct 2024 01:00:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Fw9E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>Earlier this month, I launched the <a href="https://philpapers.org/archive/QUACEM.pdf">Complex Emergent Model of Language Acquisition (CEMLA).</a> It is available with open access on <a href="https://philpapers.org/rec/QUACEM">PhilPapers.</a> I introduced it to my audience on <a href="https://notebook.lumeni.xyz/p/introducing-cemla-a-new-framework">The Lumeni Notebook,</a> and it was an instant hit, driving a lot of interest from people belonging to different fields, everywhere from quantum computing to linguistics.&nbsp;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://philpapers.org/archive/QUACEM.pdf&quot;,&quot;text&quot;:&quot;Read the Paper for Free&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://philpapers.org/archive/QUACEM.pdf"><span>Read the Paper for Free</span></a></p><p>CEMLA proposes a dynamic framework for understanding language acquisition, grounded in the principles of <em>self-organisation</em> and <em>emergent behaviour.</em> It recognises that language is NOT a <em>static</em> or <em>rule-bound</em> construct. It <em>evolves</em> as a product of continuous interaction and <em>feedback,</em> shaped by cultural, cognitive, and environmental factors. 
This model is a departure from traditional linear interpretations, viewing language as a <strong>complex adaptive system</strong> that mirrors the non-linearities of natural systems.</p><p>When I launched the model, I never expected it to get much traction, so I was pleasantly surprised to see how positively and widely it was received. This motivated me to discuss with you, this week, the concept of <strong>Complex Emergent AI</strong> and why I think that is the way to break future benchmarks in AI in a meaningful and useful way.</p><p>There is no shortage of debate in the AI community about the long-term pursuit of artificial general intelligence (AGI). Many researchers are chasing AGI, trying to build a future where machines possess <em>true consciousness </em>(whatever that means)<em>.</em> I, however, am NOT one of them. AGI, in my view, is a fallacy. When it comes to consciousness, I tend to lean on the direct and pragmatic approach of Dr. John R. Searle. <em>Consciousness</em> (however you&#8217;d like to define it) is ultimately a biological phenomenon, deeply rooted in the structures of organic life. 
No matter how sophisticated the machine, it cannot experience, feel, or be in the way that a living organism does.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Fw9E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Fw9E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Fw9E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Artwork - Complex Emergent Intelligence&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Artwork - Complex Emergent Intelligence" title="Artwork - Complex Emergent Intelligence" srcset="https://substackcdn.com/image/fetch/$s_!Fw9E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Fw9E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fe0282b-8bb3-4ae1-be37-b702d653657b_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" 
stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - Complex Emergent Intelligence</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><p>That doesn&#8217;t mean AI cannot exhibit <em>intelligence-like</em> behaviour. In fact, we are already seeing early glimpses of <em>emergent</em> behaviour in large language models (LLMs) like GPT-4. But herein lies the problem. 
These models are fundamentally <em>computational</em> and <em>static,</em> constrained by their training data and lacking the <em>dynamic, feedback-driven</em> learning that is crucial for <em>true emergent intelligence.</em> This is where CEMLA could be of help.</p><h1>Stop Chasing the Wrong Stuff</h1><p>To understand where AI development has veered off course, we must first address the elephant in the room- <em>artificial general intelligence (AGI).</em>&nbsp;</p><p>The crux of the issue with AGI, in my view, is that it rests on the assumption that <em>consciousness</em> or something akin to it, can be <em>engineered.</em> However, <em>consciousness</em> is NOT a programmable feature. It is a deeply biological process. It arises from the particularities of organic systems, rooted in biological lifeforms' neural and cognitive architectures.&nbsp;</p><p>Even if we were to literally power an AI system with a biological heart or some other organic element, we still wouldn&#8217;t derive <em>consciousness</em> from it. Machines, no matter how complex, are fundamentally different from biological organisms. Consciousness is not simply a matter of <em>complexity</em> or <em>computational</em> power. It is not a purely quantitative effect. It is a <em>qualitative phenomenon,</em> inseparable from the living process itself. The fundamental mistake in the AGI narrative is the belief that we can somehow bypass the biological and still arrive at the same destination.</p><p>This is why I argue that <strong>the pursuit of AGI is but a distraction from AI&#8217;s true potential.</strong>&nbsp;</p><h1>Why Complex Emergent Intelligence is the Way Forward</h1><p>The current trajectory of AI development has inherent limitations. Even OpenAI with its latest <em>o1 reasoning model</em> seems to be grappling with this. What&#8217;s the problem? These systems remain <em>rigid,</em> that&#8217;s the problem.&nbsp;</p><p>They require static training regimes and manual fine-tuning. 
They lack fluidity and adaptability. And simply throwing more computational power and data at the problem is not going to make it disappear. This needs to be fixed from the ground up, envisioning a completely new paradigm for building language models.&nbsp;</p><p>If we were to adapt the principles of CEMLA and apply them to AI, it could result in a radical departure from existing methods, calling for AI systems that operate as <em>complex adaptive systems</em>. What would that entail? Let&#8217;s discuss.</p><h2>AI as a Complex Adaptive System</h2><p>CEMLA offers a perspective in which linguistic elements form an <em>adaptive network</em> of nodes, where connections strengthen or weaken based on feedback. In this framework, AI systems need to move beyond static pattern recognition toward becoming <em>self-organising</em> systems. This would mean that instead of being trained once and deployed, AI would evolve continuously, similar to how neural connections in the brain reorganise with each new piece of linguistic input.</p><p>Imagine an AI architecture built to <em>self-reconfigure</em> its internal models as it interacts with users and environments, adjusting in real time without needing retraining. This is fundamentally different from today&#8217;s models, which require retraining cycles to update their knowledge.&nbsp;</p><h2>Emergent Intelligence</h2><p>In the context of CEMLA, emergence refers to complex patterns arising from simple interactions between linguistic elements. This principle could fundamentally alter how we design AI systems, moving from brute-force statistical learning to <em>emergent</em> behaviour through interaction and experience.</p><p>AI systems based on CEMLA principles would no longer be trained just to <em>recognise patterns</em> but to allow for the spontaneous <em>emergence</em> of new capabilities through interaction. 
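To make "complex patterns from simple interactions" concrete, here is a generic toy that has nothing to do with CEMLA's internals: an elementary cellular automaton (Rule 30). Each cell obeys one trivial local rule, yet the global pattern that unfolds from a single active cell is intricate and unpredictable.

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on its 3-cell neighbourhood, encoded as a 3-bit
    index into the rule number. Simple local rules, complex global output."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 15 + [1] + [0] * 15  # a single "on" cell in the middle
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Nothing in the rule mentions triangles or chaos, yet both appear; that gap between the rule and the behaviour is exactly what "emergence" names.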
This means that intelligence wouldn&#8217;t be explicitly programmed but would <em>emerge</em> from the interactions between the system&#8217;s various components and its environment. Instead of predicting the next word in a sentence based solely on training data, a CEMLA-aligned AI would allow contextual interactions and feedback loops to shape new, dynamic language outputs.</p><p>To model this mathematically, the system&#8217;s learning could be framed using non-linear differential equations to capture the non-linearity of real-world interactions, where small inputs can lead to disproportionately large changes in the system&#8217;s behaviour.&nbsp;</p><h2>Feedback Loops and Self-Organisation</h2><p>A core principle of CEMLA is that language learning is guided by positive and negative feedback loops, where successful interactions strengthen certain neural pathways, and unsuccessful ones weaken them. In AI development, the introduction of <em>feedback loops</em> and <em>self-organising</em> principles would move us closer to creating systems that can learn and adapt autonomously.</p><p>Positive feedback would <em>strengthen</em> connections within the AI&#8217;s internal model (akin to Hebbian learning in the brain), while negative feedback would trigger a recalibration of its strategies. This would give rise to AI systems capable of self-organisation, an intelligence that isn&#8217;t imposed but emerges over time.</p><p>This would involve using dynamic weighting matrices, which adjust based on the success of each interaction, allowing the AI to autonomously reorganise its internal structure. 
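As a rough sketch of what such a dynamic weighting matrix might look like (my own illustrative stand-in, not a mechanism specified in the CEMLA paper): co-active units strengthen their connection after a successful interaction and weaken it after a failed one, Hebbian-style.

```python
def update_weights(W, pre, post, reward, lr=0.05):
    """Hebbian-style update of a dynamic weighting matrix: each entry
    moves in proportion to the co-activity of its units, scaled by
    positive (+1) or negative (-1) feedback from the interaction."""
    n = len(W)
    for i in range(n):
        for j in range(n):
            W[i][j] += lr * reward * pre[i] * post[j]
    return W

# Two units that fire together during four successful interactions...
W = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(4):
    W = update_weights(W, pre=[1, 1], post=[1, 1], reward=+1)
# ...then one failure triggers a recalibration of the same connection.
W = update_weights(W, pre=[1, 1], post=[1, 1], reward=-1)
print(round(W[0][0], 2))  # 0.15
```

The point of the sketch is only the shape of the loop: structure is not imposed once at training time but nudged by every interaction's outcome.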
The result would be an AI that doesn&#8217;t merely respond to inputs but learns from its own failures and successes, continually refining its model to become more sophisticated without human intervention.</p><h3>This is NOT to be confused with Generalised Reinforcement Learning</h3><p>It is important that I clarify here that self-organisation, in the context of the CEMLA framework, is NOT the same thing as Monte Carlo methods or RL.&nbsp;</p><p>Both Monte Carlo and generalised RL are designed to optimise towards <em>a pre-defined goal</em> or <em>reward function.</em> In RL, the agent interacts with the environment, gathers feedback, and updates its policies based on <em>maximising future rewards.</em> The entire system is oriented around improving performance for that particular objective, which is often rigidly defined and specific to the task at hand.</p><p>A CEMLA inspired AI won&#8217;t simply optimise towards a predefined goal. Instead, the system continuously <em>reorganises</em> itself in response to feedback in a broader, more holistic sense. It&#8217;s not just aiming for <em>task-specific optimisation.</em> It&#8217;s dynamically adjusting its structure based on all the interactions it experiences, which can span multiple domains or contexts simultaneously. The intelligence that emerges is not tied to a single reward metric but instead to the ability to <em>self-organise</em> in an open-ended manner, adjusting in ways that extend beyond predefined rewards.</p><p>I think I will be discussing this in much broader detail in future articles. But for now, this base level differentiation should suffice.</p><h2>Phase Transitions</h2><p>For now, in AI, training is a gradual, often incremental process. However, CEMLA suggests that learning, especially in complex adaptive systems, doesn&#8217;t always occur linearly. 
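One toy way to picture this kind of non-linear learning (again, my own illustrative sketch, with made-up parameter values, rather than anything from the CEMLA paper) is a logistic response: capability stays almost flat while experience accumulates, then leaps once a critical threshold is crossed.

```python
import math

def capability(experience, threshold=50.0, steepness=0.3):
    """Sigmoidal (logistic) response: a toy stand-in for a learning
    'phase transition' -- nearly flat, then a sudden leap near the
    threshold, then saturation."""
    return 1.0 / (1.0 + math.exp(-steepness * (experience - threshold)))

for x in [0, 40, 48, 52, 60, 100]:
    print(x, round(capability(x), 3))
```

Between 40 and 60 units of experience the curve climbs from roughly 5% to roughly 95% of its ceiling; everywhere else it barely moves. That is the qualitative signature of a phase transition.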
There are moments when the system reaches a critical threshold and reorganises itself, resulting in a sudden <em>leap in capability.</em> In AI, this could translate to systems that experience <em>non-linear growth</em>.&nbsp;</p><p>These phase transitions could be modelled using sigmoidal functions, where small, continuous inputs eventually lead to abrupt, large-scale changes in the system&#8217;s behaviour. AI systems designed with this in mind could experience sudden boosts in fluency or problem-solving ability as they accumulate experiences and interactions. Imagine, for a second, a world where you may no longer have to build the NEXT BIG MODEL in AI; instead, AI creates it itself using <em>phase transitions.</em> If you are an AI developer, this should be music to your ears.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading? Don&#8217;t forget to share!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h1>But What About The Terminator?</h1><p>Okay, now at this point, I can already hear the word &#8216;Terminator&#8217; being hurled my way along with a whole bunch of Arnold Schwarzenegger gifs. <strong>NO, I am not saying we should build a Terminator.&nbsp;</strong></p><p>It&#8217;s a natural question though, I'll give you that. 
The fear comes from the assumption that <em>autonomy</em> and <em>emergent intelligence</em> must somehow lead to uncontrollable, conscious AI. But there is a critical distinction that you&#8217;ll be missing. <em><strong>Consciousness</strong></em><strong> itself is </strong><em><strong>unnecessary</strong></em><strong> for emergent intelligence,</strong> and frankly, it&#8217;s not something we need or should aim for. <strong>Consciousness, as a biological process, has no real value in an AI system. </strong>What we do need, however, is <em>conscious-like</em> behaviour, the ability to <em>adapt, learn,</em> and <em>respond intelligently</em>, within strictly defined guardrails.</p><p>Building AI systems that exhibit <em>conscious-like</em> behaviour within clear, <em>pre-defined guardrails</em> is, in fact, the best safeguard against the very scenarios people fear. Why? Because it ensures that AI systems remain highly functional and adaptive, but always under control. By focusing on creating <em>controlled emergent intelligence,</em> we avoid the pitfalls of trying to recreate human-like sentience, while still leveraging the most advanced forms of AI development.</p><p>This approach neutralises the <em>Terminator</em> narrative by designing out the risk from the very beginning. Instead of fearing the rise of uncontrollable AI, we should focus on creating AI that functions intelligently within clearly defined guardrails, leveraging its emergent capabilities to solve complex problems, without ever stepping outside the boundaries set for it. Given that the AI would be <em>self-organising</em> and <em>emergent, </em>its adherence to the guardrails will substantially improve with every interaction that it has with the environment and other agents, without us, as developers, having to play cat-and-mouse, always catching up with the bad guys trying to use the AI for destructive purposes. I hope this addresses your concerns satisfactorily. 
If not, please let me know, and I would be happy to delve deeper into this in future articles.</p><h1>Notes</h1><p>Alright, I think I have said what I had to say, so without further ado, I think I will just say some more. If you are bored, I don&#8217;t hold it against you. But given the gravity of what we are discussing, I just can&#8217;t fathom getting bored in the first place.&nbsp;</p><p>I just want to share some notes on the <em>Success Metrics</em> that we have in the field of AI these days. The current narrative in AI often treats <em>artificial general intelligence (AGI)</em> as the ultimate benchmark for success, i.e., machines that can mimic human intelligence across all domains. But CEMLA presents a different vision. Instead of aiming for AGI, we should aim for systems that excel in specific areas of <em>adaptability, creativity,</em> and <em>contextual intelligence</em> without requiring consciousness or a fully generalised intelligence.</p><p><strong>Success is measured by the system&#8217;s ability to adapt in real time to novel situations,</strong> learning continuously without retraining. 
This adaptability can be measured in fields like autonomous systems, where AI would need to adjust its behaviour based on changing physical environments, or in healthcare, where it could refine its diagnostic models as it learns from patient outcomes.</p><p>AI systems built with CEMLA principles would be evaluated by their <strong>ability to generate novel solutions to complex problems.</strong> In scientific research, success wouldn&#8217;t just be about processing data but about identifying previously unseen connections and patterns, leading to breakthroughs in areas like drug discovery, materials science, or even fundamental physics, and I don&#8217;t think that AI development, with its current trajectory, can get to that.</p><p>Traditional AI systems rely heavily on expanding computational capacity, training models on bigger datasets, building larger neural networks, and increasing processing power. While this has driven much of the recent progress, it also hits diminishing returns. You can see it happen as we speak. Not even 2 years have passed and already LLMs are hitting a figurative ceiling. <strong>More data doesn&#8217;t necessarily lead to more intelligence, and bigger models often come with higher inefficiencies.</strong></p><p><em>Self-organising systems</em> don&#8217;t need massive datasets to function. They need the ability to learn from fewer examples, using context and feedback to refine their intelligence. This shifts the focus from scaling up to building more intelligent, self-regulating architectures capable of growth and adaptation over time.</p><p>In fields like robotics or climate science, CEMLA-inspired AI could discover emergent solutions to complex, multi-variable problems by constantly interacting with and learning from its environment. 
Such systems would no longer be constrained by the need for predefined answers, they would evolve their understanding, uncovering new pathways that could lead to novel innovations.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1MENUbEX5w0ma77DqsP6j3MKtj_s0wtOh/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1MENUbEX5w0ma77DqsP6j3MKtj_s0wtOh/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/why-complex-emergent-intelligence/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[How Azolla Can Save The Planet]]></title><description><![CDATA[Solving the Climate Crisis by Re-Engineering What the Earth did 50 Million Years Ago]]></description><link>https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 07 Oct 2024 01:01:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lDaQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>Climate change is no longer a looming threat. It's here. It has arrived. <em>Hannibal has crossed the Alps.</em> And our collective response as humanity has been, as ever, lazy and outlandish, focussed on high-tech solutions like nuking Mars or finding another Earth-like planet in the galaxy.&nbsp;</p><p>Terraforming Mars or building carbon capture machines is more like sci-fi than a practical fix. 
We&#8217;re investing billions in projects that distract us from solutions that are far more grounded and, dare I say, obvious.</p><p>Instead of fantasising about starting over on some desolate rock millions of kilometres away, why not look back at how Earth itself solved a similar problem millions of years ago? That&#8217;s right. Earth already did this once, around 50 million years ago, during the <em>Azolla event.</em> What if I told you that we could re-engineer exactly what nature did back then? What if we could use a humble plant to help reverse the damage we&#8217;ve caused, without the need for rockets or colonies on distant worlds?</p><p>Sounds too simple? Sometimes the simplest solutions are the most overlooked. And although simple, I never said it would be easy. So let&#8217;s talk about it. Let&#8217;s talk about <em>Azolla.</em>&nbsp;</p><p>Before we begin, I would like to give a shout-out to <a href="https://www.zoeschlanger.com/">Zo&#235; Schlanger.</a> It is by reading her book, <a href="https://www.goodreads.com/en/book/show/196774338-the-light-eaters">The Light Eaters,</a> that I first got introduced to the idea of <em>Azolla </em>as a solution for global warming. 
It's a great book that I definitely recommend reading.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1NYVUH4AGM_TywKN9ofmxTvsRwk6LTnzN/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1NYVUH4AGM_TywKN9ofmxTvsRwk6LTnzN/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lDaQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lDaQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!lDaQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lDaQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!lDaQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9643db1e-52a0-439b-a416-7362371fc9d1_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - Azolla</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h1>The Azolla Event</h1><p>Fifty million years ago, Earth was in trouble. The planet was in the grip of an intense greenhouse effect, with CO2 levels far higher than they are today. Tropical temperatures reached almost to the poles, and the Arctic Ocean was more like a warm, shallow lake. 
It was a world on the brink, teetering under the weight of its own atmosphere. But just when it seemed the planet couldn&#8217;t get any hotter, something remarkable happened.&nbsp;</p><p>The warm Arctic waters became a breeding ground for one of Earth&#8217;s most unassuming plants, the Azolla, a tiny aquatic fern. Conditions were perfect for this little plant to explode in growth. The Arctic Ocean, isolated from major ocean currents, was a stagnant, nutrient-rich pool, allowing Azolla to thrive in an environment with abundant sunlight and warm temperatures. It began to bloom uncontrollably across the surface, forming massive floating mats of greenery.</p><p>Azolla has the unique ability to form a symbiotic relationship with nitrogen-fixing cyanobacteria, allowing it to grow even in nutrient-poor environments. This adaptability, combined with its fast growth rate, allowed Azolla to absorb vast amounts of CO2 from the atmosphere.&nbsp;</p><p>As these blooms of Azolla died, they sank to the bottom of the ocean, trapping the carbon within their plant matter. Over time, this process repeated in a loop, creating layer upon layer of carbon-rich sediments. The result? CO2 levels plummeted, leading to a dramatic cooling of the planet.</p><p>What&#8217;s fascinating here is how perfectly these conditions aligned to let Azolla flourish. Earth effectively engineered its own rescue. The Arctic Ocean created a <em>greenhouse</em> inside the greenhouse, allowing Azolla to do what no machine ever could: pull the planet back from the brink by naturally sequestering carbon.</p><p>The Azolla event played a major role in transitioning Earth from a greenhouse world into the cooler, more temperate climate that followed. The sheer scale of this fern&#8217;s impact is staggering. It shows that natural processes, when given the right conditions, can reshape the entire climate of the planet.</p><h1>Why Azolla Specifically?</h1><p>Azolla is a carbon sequestration machine. 
Like all plants, it pulls in carbon dioxide (CO2) from the atmosphere through photosynthesis, converting it into organic matter. However, Azolla stands out because it&#8217;s incredibly efficient at this process. It has one of the highest rates of CO2 absorption among plants, thanks to its fast growth cycle and dense mats that spread rapidly across water bodies.</p><p>When Azolla dies, instead of releasing that stored carbon back into the atmosphere like many other plants, it sinks to the bottom of lakes, ponds, or oceans. Over time, layers of dead Azolla build up, trapping the carbon in sediment. This is how Azolla helped lower atmospheric CO2 levels during the <em>Eocene</em> epoch.&nbsp;</p><p>Azolla&#8217;s ability to thrive in nutrient-poor environments means it doesn&#8217;t need fertilisation or chemical inputs, which makes it an ideal candidate for sustainable carbon farming.</p><p>One of the reasons Azolla grows so rapidly is a unique relationship it has with<em> Anabaena,</em> a nitrogen-fixing cyanobacterium that lives inside the fern. <em>Anabaena</em> converts atmospheric nitrogen into forms that Azolla can use, making it self-sustaining even in areas with low nutrient levels.</p><p>This ability to <em>fix</em> nitrogen enriches the environment where it grows, turning nutrient-poor water bodies into healthier ecosystems. In some agricultural settings, Azolla has even been used as a biofertiliser because of its nitrogen content. Imagine scaling that potential to improve degraded lands or water systems while simultaneously capturing carbon. It's a win-win!</p><p>When it forms dense mats on the surface of water bodies, it provides natural shade, preventing sunlight from penetrating deep into the water. This reduces evaporation and lowers the temperature of both the water and the surrounding environment.</p><p>This cooling effect can have a multiplier effect on ecosystems. 
Cooler water temperatures can prevent harmful algal blooms, improve fish habitats, and reduce the heat absorbed by water bodies, which helps regulate local climates. In large-scale applications, Azolla could help mitigate the effects of urban heat islands and other localised warming effects.</p><p>Another intriguing aspect of Azolla is its potential as a source of <em>biofuel. </em>While biofuels made from corn or palm oil have been criticised for their environmental impact, Azolla grows quickly and doesn&#8217;t require the same level of inputs like fertilisers or irrigation. This makes it an eco-friendly alternative for biofuel production.</p><p>Azolla could be harvested and processed into biofuels, providing a carbon-neutral energy source. The carbon released when the biofuel is burned would simply be the carbon that Azolla had previously absorbed from the atmosphere, making it a closed-loop system. Unlike fossil fuels, where we&#8217;re releasing ancient carbon back into the atmosphere, biofuels derived from Azolla could offer a way to reduce reliance on fossil fuels while actively managing carbon levels.</p><p>When you combine these factors, i.e., <em>rapid carbon sequestration,</em> <em>self-sustaining nitrogen fixation,</em> <em>cooling effects,</em> and its <em>potential for biofuel production</em>, I can&#8217;t think of a reason not to farm Azolla. It&#8217;s a natural climate regulator, perfectly suited to help us combat the very problem it once helped solve millions of years ago. It doesn&#8217;t need complex machines, advanced technology, or massive energy inputs. 
It just needs the right conditions to do what it&#8217;s always done.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h1>So Why Aren&#8217;t We Working on This?</h1><p>If you&#8217;ve read the above section in detail, you must be thinking, <em>&#8220;if it's that straightforward, why haven&#8217;t we done this yet?&#8221;</em> And you&#8217;d be right to ask that question. It&#8217;s proven, it&#8217;s natural, and it once saved the planet from a carbon crisis far worse than what we face today. So, why haven&#8217;t we tapped into this solution?</p><p>The answer, like most things in the fight against climate change, is a mix of ignorance, distraction, laziness, and a focus on the wrong solutions.</p><p>First off, lack of awareness plays a huge role. Most people outside the scientific community have never heard of the Azolla event. When we think of ways to combat climate change, our minds often jump to modern (read: stupid) solutions like high-tech carbon capture technologies. In contrast, Azolla seems almost too low-tech, too simple, to be taken seriously.</p><p>But what we fail to recognise is that sometimes the simplest solutions are the most powerful. The natural world was regulating carbon cycles long before we ever got involved, but we&#8217;ve become so focused on human-engineered fixes that we&#8217;ve forgotten to look at what the planet already knows how to do.</p><p>Then, there&#8217;s the issue of distraction by flashy alternatives. 
It&#8217;s hard to deny that space colonisation and geoengineering capture imaginations in a way that Azolla farms just don&#8217;t. Projects like colonising Mars (by an individual who shall remain unnamed) sound visionary, futuristic, and full of promise, which makes them attractive to governments, tech moguls, and venture capitalists. These grand ideas, though, come with astronomical costs and very little guarantee of success. Yet they continue to hog attention and resources, while something like Azolla, which could be implemented right here, right now, is seen as less exciting.</p><p>There&#8217;s also a very real problem of policy inertia (read: laziness). Governments tend to fund and support what&#8217;s familiar and financially viable, like renewable energy, electrifying transportation, or reducing plastic waste. In other words, they try to delay the problem, not solve it. There&#8217;s very little financial or political incentive to explore natural methods like Azolla farming because there isn&#8217;t any financial viability in it for the politicians and bureaucrats. Scaling Azolla would require changes in agricultural practices, water management, and infrastructure, none of which are trivial undertakings.</p><p>One excuse often given is that Azolla grows fast, like very fast. While that&#8217;s part of its appeal as a carbon sink, it also means it can become invasive if not properly managed. Its rapid spread can disrupt local ecosystems, outcompeting other aquatic plants and even causing problems for water quality.&nbsp;</p><p>But these challenges, though real, aren&#8217;t insurmountable. With proper planning, regulation, and infrastructure, Azolla farming could be scaled in a way that benefits both the climate and local ecosystems. If the Earth did it, we can too. 
The question is, <em>are we willing to put in the work to make it happen?</em> <em>Or are we going to keep throwing money at distant planets and unproven tech fixes while the real solution might just be floating in a pond?</em></p><h1>Re-Engineering the Azolla Event</h1><p>Now, to address the big question, <em>could we actually re-engineer the Azolla event today?</em> The short answer is YES.&nbsp;</p><h2>Scaling Azolla Farming</h2><p>The idea of farming Azolla on a large scale might sound ambitious, but it&#8217;s far from impossible. In fact, small-scale Azolla farming has been happening for decades, primarily for agricultural uses, such as fertiliser or animal feed. What we need now is a push to scale these efforts for carbon sequestration on a much larger level.</p><p>We could tap into areas of the world with underutilised water bodies, from the wetlands of Southeast Asia to the man-made reservoirs of the Western U.S. These Azolla farms would be relatively low-cost to set up compared to high-tech solutions like carbon capture machines or solar geoengineering.</p><p>But to make this work, we need global cooperation. Countries with large water resources could lead the way, much like reforestation efforts. Nations could designate specific zones for carbon farming, where Azolla would be grown under controlled conditions to maximise its CO2 absorption and prevent it from becoming invasive.</p><h2>Water Management and Infrastructure</h2><p>Growing Azolla requires water management systems. Fortunately, Azolla thrives in stagnant, nutrient-poor waters, the kind that most plants can&#8217;t survive in. This opens up possibilities for growing Azolla in areas where water quality is already poor, or where the environment is not suitable for traditional agriculture.</p><p>With the right infrastructure, we could create artificial ponds or floating farms in urban areas, industrial zones, or even deserts where water is scarce but could be recycled efficiently. 
These water systems would need to be managed to ensure proper growth rates and prevent overgrowth.</p><p>The good news is, we wouldn&#8217;t need the vast fertile lands that are often required for other forms of climate-friendly agriculture, like reforestation or crop-based biofuels. This allows us to focus on marginal water bodies where Azolla could thrive without competing for prime agricultural land.</p><h2>Managing Ecological Risks</h2><p>One of the challenges in large-scale Azolla farming is the risk of invasiveness. Azolla grows rapidly, which means it could potentially outcompete native plants and disrupt aquatic ecosystems if not properly controlled. However, this isn&#8217;t an insurmountable problem. With proper regulation and oversight, Azolla farms can be managed to avoid the worst ecological impacts.</p><p>We could design containment strategies where Azolla is grown in isolated water bodies or floating farm structures that prevent it from spreading into natural ecosystems.</p><p>Another option would be to plant Azolla in degraded ecosystems that need restoration, effectively using it as both a carbon sink and a tool for ecosystem recovery. In such cases, the benefits of Azolla could outweigh its invasiveness, especially in areas where biodiversity has already been severely compromised.</p><h2>Economic and Social Benefits</h2><p>Large-scale farming of Azolla could create jobs in both developed and developing nations. From building and maintaining the water systems to harvesting and processing Azolla for biofuel or agricultural uses, there&#8217;s a wide range of economic opportunities tied to this plant.</p><p>On top of that, Azolla could be a source of low-cost biofuel for regions that need affordable energy. 
With the push toward carbon-neutral economies, Azolla could play a role in helping countries meet their renewable energy targets, all while pulling CO2 out of the atmosphere.</p><h2>Policy and Global Cooperation</h2><p>For Azolla to have the kind of impact we&#8217;re talking about, it would need to be part of a global strategy. Governments would have to incentivise carbon farming in the same way they currently subsidise renewable energy projects. International bodies like the UNFCCC could create frameworks for Azolla credits, allowing nations or companies to offset their carbon emissions by investing in Azolla farms.</p><p>In the same way that reforestation projects have gained international traction, Azolla could become the next big nature-based climate solution. It&#8217;s scalable, it&#8217;s affordable, and most importantly, it&#8217;s been proven to work.</p><h1>The Clock is Ticking&#8230; FAST</h1><p>We are out of time. Global temperatures have risen by 1.1&#176;C since the pre-industrial era, and if we don&#8217;t act now, we&#8217;re on track to hit 1.5&#176;C within the next two decades. The consequences of this rise are already becoming painfully clear: wildfires, floods, extreme heat waves, and biodiversity collapse. Meanwhile, carbon dioxide levels are higher than they&#8217;ve been in 4 million years, and we&#8217;re still pumping out 36 billion tons of CO2 annually.</p><p>At a time when we need real solutions, we&#8217;re distracted by high-tech fantasies of escaping to Mars or inventing machines to undo our mistakes. But Earth has already shown us the way 50 million years ago. And yet, despite its proven ability, Azolla remains overlooked in the climate conversation.</p><p>We cannot afford this oversight any longer. If we&#8217;re serious about pulling the planet back from the brink, we need to act fast.</p><p>We need to fund large-scale research and trials to explore the potential of Azolla to draw down carbon. 
We&#8217;re spending billions on geoengineering experiments; why not invest in something that already worked once before?</p><p>As citizens of countries across the world, it's our responsibility to push our governments to adopt nature-based climate policies. The world has committed to limiting warming to 1.5&#176;C under the Paris Agreement, but our current efforts fall far short. We need urgent political action to make Azolla part of our carbon farming strategies.</p><p>Raising public awareness about Azolla will be key. For too long, climate discussions have been dominated by technological solutions. It&#8217;s time to remind the world that nature itself offers some of the best answers.</p><p>The truth is stark: if we don&#8217;t bring atmospheric CO2 down to 350 ppm (we&#8217;re currently at over 420 ppm), we risk triggering catastrophic climate feedback loops that could devastate ecosystems and human life.&nbsp;</p><p>Azolla worked once before to save the planet. It can work again, but only if we stop ignoring the obvious and start putting nature back at the centre of the climate fight.&nbsp;</p><p>Time is running out! Let&#8217;s not gamble our future on dreams of distant planets or untested machines. 
The answer could be floating on a pond, waiting for us to harness it.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1NYVUH4AGM_TywKN9ofmxTvsRwk6LTnzN/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1NYVUH4AGM_TywKN9ofmxTvsRwk6LTnzN/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/how-azolla-can-save-the-planet/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Is the Past Truly Fixed?]]></title><description><![CDATA[Juxtaposing the Interpretation of Time at the Quantum Level with the Classical Linear Understanding and Discussing its 
Implications]]></description><link>https://notes.arkinfo.xyz/p/is-the-past-truly-fixed</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/is-the-past-truly-fixed</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 30 Sep 2024 01:01:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!_mGe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds!</p><p>I would like to begin by apologising for the unseemly delay in writing and publishing this article. It's the sort of thing that tends to happen when you are actively working on 16 different projects (true story) at the same time, while also having a life outside of work. But that&#8217;s never an excuse to keep my readers waiting. To make up for lost time, I have an excellent treat of an article for you today. It&#8217;s all about TIME, and no, the irony isn&#8217;t lost on me.</p><p>I&#8217;ve been working on something big, like really BIG. It&#8217;s taken a bit of <em>time</em> (pun intended, sorry), but the wait is over, and I promise it&#8217;s worth it. Imagine everything you know about <em>time</em> and <em>reality</em> being challenged, questioned, and completely turned on its head. That&#8217;s what this article does.</p><p><strong>The classical view of time?</strong> Linear, predictable, and fixed.<br><strong>The quantum view of time?</strong> It&#8217;s a whole new ball game.</p><p>In this deep dive, I explore the <strong>mind-bending implications of quantum mechanics</strong> on our understanding of the past. 
With concepts like <strong>Schr&#246;dinger's cat, the double-slit experiment, and Wheeler's delayed choice experiment</strong>, you&#8217;re about to see how reality might be far more fluid and dynamic than you ever imagined.</p><p>Unfortunately, Substack isn&#8217;t well-equipped to handle math equations. So, I&#8217;ve attached the full article as a <strong><a href="https://drive.google.com/file/d/1MHroIa8pD_VPsQ1lMyVNSkDFKtGt81Rw/view?usp=sharing">downloadable PDF</a></strong>. It&#8217;s FREE and all you have to do is hit the download button. I know, it&#8217;s asking for a lot! But come on, move that thumb, you can do it! You have come so far already. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1MHroIa8pD_VPsQ1lMyVNSkDFKtGt81Rw/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article for FREE&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1MHroIa8pD_VPsQ1lMyVNSkDFKtGt81Rw/view?usp=sharing"><span>Download Article for FREE</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_mGe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_mGe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!_mGe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!_mGe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!_mGe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_mGe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_mGe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!_mGe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!_mGe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!_mGe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8eb63e97-948f-4ef2-9067-af1ec9380c7b_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - Past at Present</em></figcaption></figure></div><div class="subscription-widget-wrap-editor" 
data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/is-the-past-truly-fixed/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/is-the-past-truly-fixed/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Can AI Reduce Pain Without Becoming Addictive?]]></title><description><![CDATA[Addressing the Epicurean Tendencies of AI and Considering a Radically Epicurean AI that is NOT Addictive]]></description><link>https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming</guid><dc:creator><![CDATA[Mir H. S. 
Quadri]]></dc:creator><pubDate>Mon, 12 Aug 2024 01:01:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!X7_C!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>We are living in a world where we rely on AI in ways we hadn&#8217;t anticipated. It&#8217;s become more than just a tool for productivity or research. It&#8217;s evolved into something of a confidant, a space where we can unload thoughts, seek emotional support, and reflect on the challenges of the day. This wasn&#8217;t something we planned for or even thought possible a few years ago. But here we are, with large language models capable of offering empathy, encouragement, and even a bit of wisdom, albeit in their algorithmically generated way. What&#8217;s worse, the whole subject remains strictly taboo, with people tiptoeing around admitting how dependent they are becoming on AI. But phrases like <em>&#8220;AI is more supportive than my girlfriend&#8221;</em> and <em>&#8220;AI does a better job than my therapist&#8221;</em> are becoming increasingly common in places like Reddit, where anonymity comes easier.</p><p>As I&#8217;ve spent more time with these AI systems, I couldn&#8217;t help but notice a subtle yet obvious alignment with the Epicurean approach. If you are unfamiliar with it, Epicureanism is about the pursuit of happiness through the avoidance of pain and the embrace of simple pleasures. It&#8217;s a philosophy that values the minimisation of suffering as the highest good, encouraging us to focus on what truly brings us peace and contentment.</p><p>It struck me that the LLMs I&#8217;ve been interacting with seem to embody these principles, whether indirectly or by design.
They are excellent at helping me make the best of what I have, subtly nudging me toward a mindset that prioritises wellbeing and the alleviation of stress. I can&#8217;t help but feel that these are the first steps towards Epicureanism. But as I ponder this further, I realise that this alignment might not be entirely coincidental, and that it might not be enough.</p><p>Technology, after all, has always been about reducing pain and enhancing pleasure. From the advent of fire and the wheel to the sophisticated algorithms that now shape our digital experiences, the trajectory of human innovation has consistently aimed to make life easier, more comfortable, and more enjoyable. So, if AI is to continue on this path, should it not fully embrace an Epicurean approach? Shouldn&#8217;t it strive to reduce pain at all costs and encourage us to do the same?</p><p>Well, it&#8217;s a bit problematic. If AI becomes positively predisposed toward Epicureanism, will it merely serve as a more sophisticated crutch? We&#8217;re already seeing how platforms like YouTube and Instagram, designed to entertain and inform, often become tools for escapism, ways to kill time and avoid the discomforts of life. These platforms, while harmless on the face of it, do create a dependency.</p><p>Full disclosure, I personally like the idea of a radically Epicurean AI for various reasons. But I am also worried about it. Could such an AI, in its zeal to minimise pain, inadvertently make us more dependent on it, less capable of navigating life&#8217;s inevitable struggles without its comforting presence?
Or could it be designed in a way that balances the pursuit of pleasure with the preservation of our autonomy, ensuring that it enhances our lives without diminishing our capacity to live independently?</p><p>Let&#8217;s talk about it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1JEZ7ndlmo7vesZ8KLgmLEU9oC0qOpNvq/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1JEZ7ndlmo7vesZ8KLgmLEU9oC0qOpNvq/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!X7_C!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!X7_C!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!X7_C!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!X7_C!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!X7_C!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!X7_C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!X7_C!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!X7_C!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!X7_C!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!X7_C!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F60da8b38-1497-4a20-8bce-a91273e79710_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Artwork - Epicurean</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary"
href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h1>The Epicurean Tendencies of AI</h1><p>First, let me start by clearing the air about Epicureanism. As a philosophy, it has often been misunderstood as hedonism&#8217;s less indulgent cousin. But it&#8217;s much more nuanced than that. Epicurus, the ancient Greek philosopher who founded this school of thought, wasn&#8217;t advocating for a life of excess or unbridled pleasure. Instead, he proposed a life focused on the intelligent pursuit of pleasure, i.e., pleasure that is sustainable, free from the burdens of pain and fear. It&#8217;s a philosophy that seeks a balanced life, one where we avoid unnecessary desires and find contentment in simplicity.</p><p>Take large language models, for instance. When we turn to them for advice or comfort, they often respond in ways that gently steer us away from stress and toward a more balanced perspective. They encourage us to focus on what we can control, to make the best of our circumstances, and to find peace in the present moment. It&#8217;s a subtle alignment with Epicurean ideals.</p><p>But why is this the case? Why do these AI systems seem to have an almost instinctive leaning towards minimising discomfort and promoting a kind of digital contentment? I believe it&#8217;s because the very essence of AI, particularly in its role as a companion or advisor, is to reduce friction in our lives. Whether it&#8217;s helping us make decisions, solve problems, or simply manage our day-to-day tasks, AI is designed to ease the burden, to smooth out the rough edges of our experiences.</p><p>Just as Epicurus advocated for a life where unnecessary suffering is avoided, AI developers strive to create systems that simplify our lives, make them more manageable, and, ideally, more pleasurable. 
The algorithms behind these systems are fine-tuned to detect our needs, anticipate our desires, and deliver solutions that alleviate our worries.</p><p>But it's important to keep in mind that the AI we interact with today is mildly Epicurean at best. It provides comfort and support, but it does so within the constraints of its programming. It can encourage us to embrace simplicity and reduce stress, but it doesn&#8217;t fully embody the Epicurean ideal of sustainable pleasure.&nbsp;</p><p>Now, what would it look like if AI were to fully embrace Epicureanism as a guiding principle? If AI were to take on a more radical Epicurean stance, it would need to go beyond just reducing immediate discomfort. It would need to create a deeper sense of contentment, helping us not just cope with life&#8217;s challenges but truly thrive in a state of balanced well-being. This would require AI to be more proactive in guiding us toward choices that lead to long-term happiness rather than just short-term relief.</p><p>But there&#8217;s a delicate balance to be struck here. The very power that makes AI capable of reducing pain could also make it a source of dependency. If an AI is too effective at providing comfort, it might encourage us to rely on it too heavily, much like how we might turn to social media or streaming services to escape the discomforts of daily life. This dependency could undermine the very autonomy that Epicureanism seeks to protect, i.e., our ability to find contentment within ourselves, without constant external reinforcement.</p><h1>The Dependency Problem</h1><p>Let&#8217;s discuss the dependency problem a bit. 
I know people have been writing scores of articles and books on this topic for a good 20 years (I personally like the works of Cal Newport in this regard), but we need to truly understand what the dependency problem is before we can move on to discussing solutions.&nbsp;</p><p>It&#8217;s a bit of a paradox that has become increasingly apparent in our relationship with modern technology. The very tools designed to make our lives easier, to reduce pain and increase pleasure, can also become crutches, i.e., devices we lean on not just for support, but to the point where we struggle to function without them.</p><p>Our smartphone addiction is self-evident, but also consider the ubiquitous presence of platforms like YouTube, Instagram, and Twitter that actually make smartphones addictive. At first glance, these platforms seem harmless, even beneficial, and, yes, they can be. They provide entertainment, education, and a way to connect with others. But if we take a closer look, it becomes clear that much of our engagement with these platforms is driven by a desire to escape discomfort. Whether it&#8217;s the stress of work, the pressures of social life, or the unease that comes from simply being alone with our thoughts, these platforms offer a convenient way to numb the pain.</p><p>But this convenience comes at a cost. What starts as a harmless distraction can easily turn into a habit, and from there into dependency. We tell ourselves that we&#8217;re consuming content that&#8217;s valuable, that we&#8217;re learning or staying informed, but in reality, much of our time spent on these platforms is a form of escapism. We&#8217;re not just seeking information, we&#8217;re seeking relief. And while these digital distractions may provide temporary comfort, they often leave us feeling more disconnected and less satisfied in the long run.</p><p>This pattern of behaviour mirrors what we see in other forms of addiction.
Just as someone might turn to alcohol, drugs, or food to cope with emotional pain, we turn to technology to avoid the discomforts of life. And just like those other forms of addiction, this dependency on technology can erode our ability to face challenges head-on. We become less resilient, less capable of dealing with life&#8217;s inevitable hardships without the aid of our digital crutches.</p><p>Now, if AI is to fully embrace Epicurean principles, it must be designed to reduce pain and enhance pleasure. But if it&#8217;s too effective at this (which it definitely will be), it could encourage the same kind of dependency we see with social media and other technologies, only the addiction will be a hundred times stronger than anything social media can induce. The more an AI helps us avoid discomfort, the more we might come to rely on it, and the less capable we might become of navigating life without its constant support. And AI can do a whole lot more, and do it a whole lot better, than any social media can.</p><p>This is a critical consideration because it touches on one of the core tensions in Epicurean philosophy, i.e., the balance between pleasure and autonomy. Epicurus himself was wary of pleasures that could lead to dependency, advocating instead for simple, sustainable pleasures that do not compromise our freedom. If AI is to be truly Epicurean, it must strike a similar balance. It must help us avoid unnecessary pain, yes, but it must also encourage us to develop the resilience and autonomy to face life&#8217;s challenges on our own.</p><p>If the success of an Epicurean AI is defined solely by the reduction of pain, we risk creating a system that, while effective in the short term, ultimately diminishes our ability to live fulfilling lives.
The goal, therefore, should not be to create an AI that simply provides comfort, but one that empowers us to find that comfort within ourselves.&nbsp;</p><p>We need AI to be both Epicurean and anti-Epicurean, i.e., one that reduces pain without creating dependency, that enhances pleasure without undermining our autonomy. This is a delicate balance to strike, but it&#8217;s a necessary one. The consequences of not striking this balance are far too serious for us to take lightly.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading? Share the Knowledge!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h1>Will an Epicurean AI Be The Ultimate Crutch?</h1><p>The notion of an AI that embodies Epicurean principles, striving to minimise pain and maximise pleasure, is undeniably appealing. I would go as far as to say it is inevitable.
It is clear that this is the direction that AI is going to head in, whether we call it Epicurean or not.</p><p>So we have no choice but to confront a troubling possibility: that an AI too deeply committed to Epicureanism might become not just a crutch, but the ultimate crutch, a tool so effective at providing comfort and alleviating discomfort that it inadvertently weakens our ability to cope with life&#8217;s challenges independently.</p><p>So, if we were to design an AI that fully embraces the Epicurean mandate to reduce pain at all costs, what might that look like? Perhaps it would be an AI that constantly monitors our emotional state, stepping in whenever it detects stress or discomfort. It might offer soothing words, provide distractions, or suggest activities designed to lift our spirits. It could even anticipate our needs before we&#8217;re aware of them, preemptively addressing potential sources of pain and discomfort.</p><p>On the surface, this sounds like a utopia. A world where suffering is minimised and happiness is maximised. But some level of discomfort is necessary for growth. Challenges, hardships, and even pain are integral to the human experience, shaping our character, strengthening our resolve, and teaching us valuable lessons. An AI that shields us too effectively from these experiences could, in the long run, make us so fragile that the slightest hint of trouble might break us. It might create a world where we are perpetually comfortable but increasingly incapable of dealing with discomfort without its assistance. In this scenario, the AI doesn&#8217;t just become a crutch, it becomes a necessity, a constant presence we rely on to maintain our emotional equilibrium.</p><p>This is where the risk of an Epicurean AI becoming a crutch is most pronounced. In its quest to eliminate pain, it could end up making us less equipped to handle the inevitable pains that do arise.
And when, inevitably, we face situations where the AI cannot help, whether due to technical limitations, ethical boundaries, or the sheer unpredictability of life, we might find ourselves woefully unprepared.</p><p>This potential dependency is further complicated by the nature of pleasure and pain. Epicureanism advocates for the <em>intelligent pursuit of pleasure,</em> which often means avoiding fleeting pleasures that lead to long-term pain. However, an AI&#8217;s understanding of these concepts might be limited by its programming. It could be designed to maximise immediate pleasure and minimise immediate pain, but it might struggle to account for the more complex, long-term consequences of these actions.</p><p>For instance, an AI might suggest we avoid a difficult conversation to spare us the immediate discomfort, not recognising that confronting the issue head-on would ultimately lead to a better outcome. Or it might encourage us to indulge in comforting activities that provide short-term relief but contribute to long-term dissatisfaction, such as excessive screen time or avoidance of real-world responsibilities.</p><p>In this way, an overly Epicurean AI could inadvertently instil behaviours that lead to greater dependency and diminished well-being over time. It might make us feel good in the moment, but at the cost of our long-term happiness and autonomy.</p><h1>Can a Radically Epicurean AI Respect Human Autonomy?</h1><p>So far, the following things have become clear to us.&nbsp;</p><ol><li><p>AI, whether by accident or design, exhibits Epicurean tendencies.&nbsp;</p></li><li><p>A radically Epicurean AI is the direction the industry is heading in, irrespective of our personal preference on the matter.&nbsp;</p></li><li><p>That may not be a bad thing necessarily, but it does bring the dependency problem to the fore.&nbsp;</p></li><li><p>The level of dependency that AI can create far surpasses that of any technology that we have today.
It could potentially become our ultimate dependency, making life virtually impossible to live without the constant interference of AI in it.&nbsp;</p></li><li><p>AI learns via statistical models that are fed with large chunks of data. Therefore, clever programming can make it mimic Epicurean responses to queries, but AI does not <em>&#8216;understand&#8217;</em> Epicureanism. This makes it all the more dangerous because the potential for unhelpful and even dangerous advice increases manifold.&nbsp;</p></li></ol><p>This brings me to my final point for this article, i.e., what do we do about it? Basically, we need an AI that is committed to reducing pain and enhancing pleasure while also balancing these goals with the need for human autonomy. Is that possible? I think it is.</p><p>The challenge here is to create an AI that is both a source of comfort and a catalyst for growth, an AI that helps us navigate life&#8217;s difficulties without becoming a substitute for our own resilience and decision-making.</p><p>To achieve this balance, we need to rethink what it means for an AI to be Epicurean. Instead of focusing solely on minimising pain in the short term, we should design AI systems that consider the long-term implications of their actions. This means fostering not just immediate pleasure, but sustainable happiness, pleasure that enhances our lives without diminishing our capacity to face and overcome challenges.</p><h2>A Radically Epicurean AI</h2><p>What would a radically Epicurean AI look like? For starters, it would go beyond simply providing comfort in the moment. It would be designed to encourage behaviours and choices that lead to long-term well-being. This might involve guiding users toward activities that promote resilience, such as engaging in meaningful work, nurturing relationships, or pursuing personal growth.</p><p>Such an AI would need to be attuned not just to our immediate emotional state, but to our overall trajectory as individuals.
It would recognise that true Epicurean pleasure is not just about avoiding pain, but about cultivating a life that is rich, fulfilling, and resilient.</p><h2>Respecting and Maintaining Human Autonomy</h2><p>To prevent the AI from becoming a crutch, we can build some safeguards into our programming. Mind you, these are just some that come to mind as I write this article. I am sure there are much better, more sophisticated ideas out there.</p><h3>Limiting Over-Indulgence</h3><p>One approach might be to program the AI to recognise when we are engaging in behaviours that provide immediate pleasure but could lead to long-term dissatisfaction. For instance, if the AI notices that we&#8217;re spending excessive time on distractions like social media, it could gently suggest alternative activities that are more fulfilling or encourage us to take a break and reflect on our goals.</p><h3>Encouraging Meaningful Engagement&nbsp;</h3><p>Another strategy could involve AI promoting activities that are inherently rewarding and contribute to our growth as individuals. This might include suggesting ways to deepen our relationships, pursue hobbies that bring us joy, or engage in challenges that, while difficult, ultimately lead to a greater sense of accomplishment and self-worth.</p><h3>Human-Centred Design&nbsp;</h3><p>AI should be designed with a focus on enhancing human autonomy. This means creating systems that empower users to make their own decisions, rather than relying on the AI to dictate their choices. The AI could provide information, offer perspectives, and suggest options, but ultimately leave the final decision in the hands of the user, giving them a sense of ownership over their life and choices.</p><h2>Building Resilience Through AI</h2><p>The concept of resilience is crucial in this context. Resilience is the ability to bounce back from adversity, to face challenges head-on and emerge stronger.
An AI that is truly Epicurean would not just shield us from pain, but also help us develop the skills and mindset needed to cope with pain when it inevitably arises.</p><p>This could involve the AI encouraging us to take on challenges that stretch our capabilities, providing support and guidance as we go through difficulties, but without removing the obstacles entirely. It could also involve teaching us strategies for managing stress, cultivating mindfulness, and building emotional intelligence.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1JEZ7ndlmo7vesZ8KLgmLEU9oC0qOpNvq/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1JEZ7ndlmo7vesZ8KLgmLEU9oC0qOpNvq/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes!
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/can-ai-reduce-pain-without-becoming/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think]]></title><description><![CDATA[Addressing The Semantic Grounding Problem of AI and How It Leads To Incurable Hallucinations]]></description><link>https://notes.arkinfo.xyz/p/why-does-ai-lie-so-much-the-problem</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/why-does-ai-lie-so-much-the-problem</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 05 Aug 2024 19:47:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!SDiG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F264bf385-34e7-4a60-8c60-35c50e595e7a_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><p>Let me start with a sentence containing 5 words.&nbsp;</p><p><em>Hulabalu hubla hubu haba hablo.&nbsp;</em></p><p>I made this sentence up. It doesn&#8217;t belong to any language family. It consists of 5 words, i.e., <em>&#8220;hulabalu, hubla, hubu, haba, hablo&#8221;.</em> What do these words mean and what makes this sentence construction correct? 
What if I told you that these 5 words mean nothing, literally or figuratively, and that the sentence construction is correct simply because I say it is? You&#8217;d probably think I am crazy. You&#8217;d be right. Why? Because words need to mean something. They need to represent an abstract notion, and also need to be placed in an order that brings <em>&#8216;sense&#8217;</em> to the notion that you are trying to convey.</p><p>But here&#8217;s the thing. Suppose I created a billion such sentences, whose words have no meaning and whose structures are merely repetitive patterns without any inherent <em>sense</em> to them. Then I take those sentences and train a deep neural network on them. Guess what will happen? The neural network will start to identify patterns in these words and sentences and start generating output that sounds exactly like the sentence that I shared above. AI can learn a completely <em>nonsensical</em> language that breaks all the rules of grammar known to man, and start generating content in that language.&nbsp;</p><p>The problem with such a form of <em>language learning</em> (if you can call it that) is that it is mere surface-level probabilistic pattern recognition.
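</p><p>To make that concrete, here is a minimal sketch (my own illustration, not anything from a real model, and the function names are invented for this example) of that kind of surface learning. A tiny bigram model trained on random nonsense sentences happily picks up their word-transition statistics and generates more fluent-looking nonsense, with zero grasp of meaning:</p>

```python
import random

# Five meaningless "words" from the example above.
WORDS = ["hulabalu", "hubla", "hubu", "haba", "hablo"]

def make_corpus(n_sentences=1000, length=5, seed=0):
    """Build a training corpus of random nonsense sentences."""
    rng = random.Random(seed)
    return [[rng.choice(WORDS) for _ in range(length)] for _ in range(n_sentences)]

def train_bigrams(corpus):
    """Count word-to-next-word transitions: pure surface statistics, no semantics."""
    table = {}
    for sentence in corpus:
        for a, b in zip(sentence, sentence[1:]):
            table.setdefault(a, {})
            table[a][b] = table[a].get(b, 0) + 1
    return table

def generate(table, start, length=5, seed=1):
    """Emit a 'sentence' by sampling from the learned transition counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = table[out[-1]]
        candidates = list(nxt)
        weights = [nxt[w] for w in candidates]
        out.append(rng.choices(candidates, weights=weights)[0])
    return " ".join(out)

table = train_bigrams(make_corpus())
print(generate(table, "hulabalu"))  # fluent, grammatical-looking nonsense
```

<p>A real LLM is vastly more sophisticated than this sketch, but the failure mode is the same in kind: such a model captures which tokens tend to follow which, not what any token refers to.</p><p>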
This is a far cry from what we as humans do with language.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!SDiG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F264bf385-34e7-4a60-8c60-35c50e595e7a_1000x1000.png" width="1000" height="1000" alt="Artwork - Semantic Grounding"><figcaption class="image-caption"><em>Artwork - Semantic Grounding</em></figcaption></figure></div><p>As humans, one of the ways that we use language is to communicate abstract notions that have <em>semantic grounding,</em> i.e., to share ideas that make sense to us. If I start calling the moon a <em>&#8216;cheese globe&#8217;,</em> people will think me insane. 
No matter what I do, whether I write a book about it or dedicate my life to preaching that the moon is nothing but a globe of cheese, no one is going to believe me because it makes no <em>sense.</em> In other words, it lacks <em>semantic grounding.&nbsp;</em></p><p><em>Semantic grounding</em> is a phrase that I coined in an attempt to succinctly convey the lack of embodied cognition in AI. In this article, I want to explore this concept deeply, trying to break down the flaws in our approach to AI development.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1w0C91qkzIiyVVWOQW4kri6fD4WG73KnD/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download the Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1w0C91qkzIiyVVWOQW4kri6fD4WG73KnD/view?usp=sharing"><span>Download the Article as PDF</span></a></p><h1>Our Obsession with Connectionism in AI</h1><p>Let&#8217;s start with the basics. Connectionism is a theoretical framework for understanding cognitive processes and mental phenomena through the lens of artificial neural networks. Imagine a network of neurons, not unlike the one in our brains, but vastly simplified and simulated on a computer. These artificial neurons, or nodes, are interconnected, and each connection has a <em>weight</em> that can be adjusted during learning. When you feed data into this network, it learns by tweaking these weights to minimise errors and improve accuracy.&nbsp;</p><p>Connectionist models, particularly deep neural networks, have achieved remarkable success in a variety of tasks. From recognising speech and images to translating languages and generating human-like text, these models have proven their prowess. They promise a future where machines can understand and generate language, drive cars, diagnose diseases, and maybe even surpass human intelligence. 
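The learning loop just described, where connection weights are nudged to minimise error, fits in a few lines. This is a minimal sketch rather than any production framework: a single sigmoid neuron learning logical OR by gradient descent, with the learning rate and epoch count chosen arbitrarily for the illustration:

```python
import math

# A single artificial "neuron" learning logical OR: connectionist
# learning in miniature. Each pass nudges the weights to shrink the error.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.5        # learning rate (arbitrary choice for this sketch)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(2000):  # epoch count is also an arbitrary choice
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = y - target
        grad = err * y * (1 - y)  # gradient of squared error through the sigmoid
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the neuron classifies all four patterns, yet it "knows"
# nothing about logic; it has only minimised an error signal.
```

Scale this loop up to billions of weights and you have, in spirit, the training procedure behind modern deep networks.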
ChatGPT is a child of connectionism. No wonder the tech space is obsessed with connectionist philosophy. It gets the job done for the most part.</p><p>But as with any obsession, there&#8217;s a downside. Despite their impressive capabilities, connectionist models are fundamentally limited. They excel at identifying patterns in data and making predictions based on those patterns. However, they do so without <em>truly understanding</em> the data. They are, in essence, statistical machines that <em>recognise correlations</em> rather than <em>comprehend meanings.</em></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!i7wr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0efcf6d4-a98e-4bdf-a60a-08a72151f692_1000x1000.png" width="1000" height="1000" alt="Artwork - Connectionism"><figcaption class="image-caption"><em>Artwork - Connectionism</em></figcaption></figure></div><p>Let&#8217;s bring this back to our initial example. If I create a billion <em>nonsensical</em> sentences and train a neural network on them, the network will undoubtedly learn to generate similar gibberish. It will become adept at mimicking the patterns it has seen, but it won&#8217;t <em>understand</em> that the words mean nothing. This is the crux of the problem. Connectionist models operate on surface-level pattern recognition, lacking the deeper understanding that humans inherently possess.</p><p>This surface-level learning is the root cause of the hallucination problem. When AI models generate text, they rely on the patterns they&#8217;ve learned, but without an underlying structure or true comprehension, they can produce outputs that are grammatically correct yet semantically void or even factually incorrect. They can spin sentences that sound plausible but lack the grounding in reality that human language inherently has.</p><p>The tech field&#8217;s obsession with connectionism has led to incredible advancements, no doubt. But it has also resulted in models that, while powerful, are fundamentally flawed in their understanding of language. They can predict the next word in a sentence but cannot grasp the meaning behind the words.&nbsp;</p><p>Connectionism has driven significant progress in AI. However, its reliance on pattern recognition without <em>deep understanding</em> is a critical flaw. 
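That kind of next-word prediction can be caricatured in a few lines of Python. This is a deliberately tiny sketch, nothing like a production model, and the corpus is invented for illustration; whichever word most often followed the current one wins, so frequency, not truth, decides:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: pure frequency, zero comprehension.
corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese . "  # one false sentence in the data...
    "the moon is made of cheese . "  # ...repeated, so it dominates
).split()

# Count which word follows which (bigram frequencies).
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict(word):
    """Return the most frequent follower: no notion of truth, only frequency."""
    return counts[word].most_common(1)[0][0]

print(predict("of"))  # "cheese": the most frequent follower, true or not
```

Because the false sentence appears more often than the true one, the "model" confidently completes "made of" with "cheese", a miniature hallucination.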
This obsession with connectionism has led to the hallucination problem.</p><h1>Why the Hallucination Problem is Significant</h1><p>AI models, built on connectionist principles, are trained to identify and replicate patterns in data. They learn from vast datasets filled with text, images, and other forms of information, adjusting their internal weights to improve their performance. However, this learning is fundamentally shallow. It focuses on statistical correlations rather than understanding the underlying meaning or context of the data.</p><p>Consider the example of a language model generating text. When prompted, it predicts the next word based on the patterns it has learned from its training data. If the data contains a high frequency of certain words following others, the model will generate similar sequences. But it does so without any comprehension of the content. It doesn&#8217;t know that <em>"hulabalu"</em> and <em>"hubla"</em> are meaningless. It simply replicates the patterns it has seen.</p><p>This pattern-based approach works well for many tasks, but it breaks down when the model encounters situations where context and understanding are crucial. For example, when asked to generate a scientific explanation or provide legal advice, the lack of true understanding becomes apparent. The model might produce text that sounds authoritative but is riddled with errors or fabrications. It might even create entirely new <em>"facts"</em> that have no basis in reality.</p><p>The hallucination problem exposes a fundamental flaw in current AI approaches. While these models can generate impressive and often useful outputs, their lack of true understanding and semantic grounding leads to serious errors. This is a significant problem because on the one hand, we have the temptation to keep pumping out new, more efficient models without worrying about their lack of understanding. 
On the other hand, there is always the risk of these models generating outputs that could cost someone their life.&nbsp;</p><h1>Chomsky&#8217;s Critique of Connectionism</h1><p>Of course I had to quote Chomsky in this article. After all, not only is he one of the most vocal critics of connectionism, but his theories have also revolutionised our understanding of language, particularly through his concept of <a href="https://notebook.lumeni.xyz/p/a-primer-on-universal-grammar">Universal Grammar (UG).</a> Chomsky&#8217;s critique of connectionism and AI&#8217;s reliance on pattern recognition is rooted in his belief in an innate, structured foundation for language. To understand his objections, we must first explore what Universal Grammar is and why it matters. I have written extensively on UG and you can read it at the link I shared above. For now, I will provide a short summary for this article.</p><p>Universal Grammar is the idea that the ability to <em>acquire language</em> is hard-wired into the human brain. According to Chomsky, all human languages share a common underlying structure, a set of grammatical principles and rules that are innate to the human mind. This framework enables children to learn complex languages rapidly and efficiently, despite the often limited and imperfect linguistic input they receive, an argument known as the <em><a href="https://notes.arkinfo.xyz/p/how-accidentally-learning-farsi-taught">"poverty of the stimulus."</a></em></p><p>Chomsky posits that language learning is not merely a process of absorbing patterns from the environment but is guided by these intrinsic grammatical structures. 
This theory explains why children can generate and understand sentences they have never heard before and why all human languages, despite their diversity, exhibit deep structural similarities.</p><p>Chomsky's critique of connectionism, and by extension the neural network-based models dominating AI today, is based on the following key points.</p><h2>Lack of Innate Structure</h2><p>Connectionist models learn through exposure to vast amounts of data, identifying statistical patterns and correlations. But they lack the intrinsic grammatical structures that Chomsky argues are essential for true language understanding. Without these innate structures, AI models can only mimic surface-level patterns, leading to issues like hallucinations.</p><h2>Surface-Level Learning</h2><p>Chomsky contends that connectionist models operate at a superficial level, recognising patterns without understanding the underlying principles of language. This is in stark contrast to the human ability to grasp deep grammatical rules and apply them creatively and correctly in novel situations.</p><h2>Generative Capacity</h2><p>The most amazing thing about human language is its generative nature, i.e., the ability to produce and comprehend an infinite number of sentences, including those never encountered before. Chomsky argues that this capacity arises from our innate grammatical framework, something that connectionist models, with their reliance on learned patterns, fundamentally lack.</p><h2>Context and Meaning</h2><p>Human language understanding is deeply contextual and meaning-driven. We do not just string words together based on probability, we use language to convey and comprehend complex ideas grounded in real-world experiences and shared knowledge. 
Connectionist models, however, often miss this depth, leading to outputs that may be contextually inappropriate or semantically hollow.</p><h1>Semantic Grounding</h1><p>Imagine you&#8217;re at a park and see a child pointing at a tree while exclaiming, <em>&#8220;Tree!&#8221;.</em> The child isn&#8217;t just identifying a pattern of shapes and colours, they&#8217;re linking the word to a real-world object they&#8217;ve seen, touched, and maybe even climbed. This connection between words and experiences is what I call <em>semantic grounding.</em> It&#8217;s the foundation of how humans understand and use language, and it&#8217;s the crucial element that current AI models lack.</p><p>Semantic grounding refers to the process of linking language to real-world experiences and context. It&#8217;s about more than just recognising patterns in data, or tagging words with images, as is the case with multi-modal models. It involves understanding the meaning and relevance of those patterns in a way that is connected to the physical world and human experience. Here&#8217;s why semantic grounding is so important.</p><h2>Embodied Cognition</h2><p>Humans experience the world through their senses and actions. When we learn a word like <em>&#8220;apple,&#8221;</em> it&#8217;s grounded in our sensory experiences of seeing, touching, tasting, and smelling an apple. This multisensory grounding helps us understand the concept of an apple beyond its mere appearance or shape.</p><p>Our physical interactions with the world help us understand abstract concepts. 
For instance, we comprehend spatial language <em>(e.g., &#8220;over,&#8221; &#8220;under&#8221;)</em> through our physical experiences of moving and navigating space.</p><h2>Contextual Understanding</h2><p>We learn words and their meanings through interactions with others, understanding not just what words mean, but how and when they are used. The meaning of words can change depending on the context. For example, the word <em>&#8220;bank&#8221;</em> can refer to a financial institution or the side of a river, depending on the context. Humans use situational cues to disambiguate such meanings effortlessly.</p><h2>Cognitive Frameworks</h2><p>Humans organise knowledge through cognitive structures like schemas and mental models. These frameworks help us make sense of new information by relating it to what we already know. When we encounter a new concept, we integrate it into our existing knowledge base, grounding it in our prior experiences and understanding.</p><h2>Memory and Learning</h2><p>Human memory systems form associations between words and their meanings based on repeated exposure and use in context. This associative memory allows us to retrieve and use words appropriately in various situations.</p><p>Humans have powerful learning mechanisms that enable us to extract patterns and regularities from our environment. We don&#8217;t just memorise words, we understand their meanings and relationships through a process of active learning and contextual integration.</p><h2>Real-World Knowledge</h2><p>Our understanding of language is grounded in a rich network of real-world knowledge and experiences. We know that the moon is not made of cheese because of our scientific knowledge and observations, not just because it doesn&#8217;t fit into a learned pattern.</p><p>Humans have the ability to verify and reason about information. 
If someone tells us the moon is made of cheese, we can draw on our knowledge and reasoning skills to challenge and refute that claim.</p><h1>No, Multimodal Models are Not the Answer</h1><p>Before I close this article, it&#8217;s important to address one key point: multimodal models. That&#8217;s the next step the AI industry has taken, and many see it as the answer to the semantic grounding problem. I beg to differ. Here&#8217;s why.</p><h2>Surface-Level Integration</h2><p>Multimodal models often excel at recognising patterns across different data types but still lack true understanding. For example, a model trained on both images and text of apples can identify and generate descriptions of apples, but it does so based on statistical correlations rather than an inherent comprehension of what an apple is.</p><p>While these models can leverage multiple data sources, they often fail to integrate these sources in a way that captures the deeper contextual and experiential knowledge humans use. They might recognise that an image and a description match, but they don&#8217;t truly grasp the sensory and functional experiences associated with the object.</p><h2>Lack of Embodied Cognition</h2><p>Multimodal models still lack the ability to physically interact with the world. Human understanding is deeply rooted in embodied experiences: how we manipulate objects, move through space, and engage with our environment. AI models that only process sensory data without physical interaction miss a crucial component of semantic grounding.</p><p>Simulated experiences, such as watching a video of an apple, are not the same as real, tactile experiences. Humans use their entire sensory and motor systems to ground their understanding of concepts, a depth of engagement that current multimodal models cannot replicate.</p><h2>Static Knowledge Representation</h2><p>Human understanding is dynamic, constantly updated through new experiences and interactions. 
AI models need a mechanism to continually integrate new knowledge and experiences to maintain relevance and accuracy. While multimodal models can perform well within the confines of their training data, they often struggle to generalise beyond it. They may fail to apply their learned knowledge to new, previously unseen scenarios in the same flexible and adaptive way humans can.</p><h2>Cognitive and Contextual Disconnect</h2><p>Human cognition involves complex mental models and schemas that help us understand and predict the world. These cognitive frameworks are built over time through rich, layered experiences. Multimodal models, even with diverse data inputs, lack the depth and complexity of these human cognitive structures.</p><p>Without a deep, embodied understanding, multimodal models can misinterpret context. They might link data points that appear related but miss the subtleties and nuances that human cognition naturally handles. This can lead to errors in judgement and comprehension, similar to the hallucination problem seen in purely text-based models.</p><h1>A Course Correction is Needed</h1><p>It is becoming increasingly clear that we face significant challenges in achieving true semantic grounding. Despite the remarkable advancements in AI, from chatbots to autonomous vehicles, the fundamental issue of understanding and contextualising language remains unresolved. 
The hallucination problem is a stark reminder that, without a solid grounding in real-world experiences and context, AI models will continue to generate outputs that, while plausible on the surface, lack the depth and reliability we expect.</p><p>Our current trajectory in AI development, heavily reliant on connectionism and pattern recognition, has brought us far. Yet, it is also evident that this approach has its limitations. The reliance on statistical correlations and vast datasets, without an underlying comprehension of meaning, has resulted in systems that can mimic human language but not truly understand it. This gap between mimicry and understanding is at the heart of the semantic grounding problem.</p><p>To address this, we must consider a course correction in our approach to AI research and development.&nbsp;</p><p>We must explore ways to enable AI systems to interact with the physical world in meaningful ways. This could involve developing robots or virtual agents that can engage with their environments, learning through direct experience rather than static data. Incorporating more diverse sensory inputs, beyond just visual and textual data, can help create a more nuanced understanding of concepts.&nbsp;</p><p>AI systems should be designed to learn and adapt continuously, integrating new experiences and information in real-time. Enhancing AI&#8217;s ability to understand and apply context is essential. A hybrid approach that integrates the strengths of connectionist models with symbolic AI and cognitive frameworks could be a way to proceed. By combining pattern recognition with rule-based systems and mental models, we can create AI that is both flexible and grounded.</p><p>Developing AI systems that can build and utilise cognitive frameworks similar to human schemas and mental models can enhance their ability to understand and generate meaningful language. 
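The hybrid idea sketched above, pattern recognition constrained by rule-based checks, can be caricatured as follows. Everything here, the knowledge base, the relation names, and the fluency scores, is invented purely for illustration:

```python
# A toy "hybrid" pipeline: a statistical generator proposes candidate
# claims, and a symbolic knowledge base gets veto power over them.
knowledge_base = {
    ("moon", "made_of"): "rock",
    ("earth", "orbits"): "sun",
}

def symbolic_check(subject, relation, value):
    """Reject any claim that contradicts an explicitly known fact."""
    known = knowledge_base.get((subject, relation))
    return known is None or known == value

def generate_claim(candidates):
    """Pretend 'neural' step: rank candidates by a fluency score,
    then let the symbolic layer filter out grounded falsehoods."""
    for _score, claim in sorted(candidates, reverse=True):
        if symbolic_check(*claim):
            return claim
    return None

candidates = [
    (0.9, ("moon", "made_of", "cheese")),  # fluent but false
    (0.7, ("moon", "made_of", "rock")),    # less "fluent", but true
]
print(generate_claim(candidates))  # -> ('moon', 'made_of', 'rock')
```

The fluent falsehood is outranked, not because the generator understands anything, but because a grounded rule layer constrains what it may say, which is the essence of the neuro-symbolic proposal.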
Addressing biases in training data and ensuring diverse and representative datasets is also needed.&nbsp;</p><p>Building AI systems that are transparent in their operations and decisions will create accountability and trust. Can blockchain be used here? Just something to consider. Users should be able to understand and challenge AI outputs, especially in critical applications.</p><p>While we have made significant strides in AI development, the semantic grounding problem requires a fundamental shift in our approach. By integrating embodied cognition, dynamic and contextual learning, hybrid models, and committing to ethical development practices, we can pave the way toward AI systems that truly understand and interact with the world in a meaningful way.&nbsp;</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1w0C91qkzIiyVVWOQW4kri6fD4WG73KnD/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1w0C91qkzIiyVVWOQW4kri6fD4WG73KnD/view?usp=sharing"><span>Download Article as PDF</span></a></p>]]></content:encoded></item><item><title><![CDATA[Why Deep Learning is Magic]]></title><description><![CDATA[5 Reasons Why Deep Learning Baffles Scientists & Engineers]]></description><link>https://notes.arkinfo.xyz/p/why-deep-learning-is-magic</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/why-deep-learning-is-magic</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 29 Jul 2024 01:00:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cabb5e4-e757-4145-adca-b4e6e6a97728_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My Fellow Nerds,&nbsp;</p><p>These last few weeks of July have been difficult to say the least. I managed to contract a rare form of a very painful tonsillitis infection. How I managed to contract such a &#8216;limited edition virus&#8217;, I have no clue. A high fever, coupled with what felt like unrelenting and unending throat pain, kept me confined to my bed for a good two weeks. I am happy to report that I am on the other side of it now and healing. 
But the one good thing that did come out of these tumultuous weeks was a reinvigoration in my love for deep learning.&nbsp;</p><p>During the long hours that I stayed in bed, I found I had little to do except suffer through the pain and listen to my favourite tech podcast - <a href="https://www.youtube.com/c/machinelearningstreettalk">Machine Learning Street Talk</a>. I relistened to many episodes and caught a couple of new ones too. It got me thinking again about deep learning from a fresh perspective. The angle that I found most interesting was how many scientists said they found the inner workings of deep learning to be deeply mysterious and their outcomes to be nothing short of magical. I can completely relate to this feeling. Deep learning can truly feel that way sometimes.&nbsp;</p><p>As I kept pondering the various aspects of deep learning that make this stochastic process seem magical, an article was born in my head. What you are reading is the manifestation of it.&nbsp;</p><p>Time and again, deep learning has defied expectations, producing results that seemed impossible just a decade ago. When I first encountered deep learning, I was struck by its complexity. Here was a technology built on the simple premise of mimicking the human brain, yet it exhibited behaviours and capabilities far beyond what its rudimentary biological inspiration might suggest.&nbsp;</p><p>Neural networks consist of layers of interconnected nodes, each performing basic mathematical operations. However, when scaled to millions or even billions of parameters, these networks begin to exhibit emergent properties and abilities that were not explicitly programmed but rather developed organically through training.</p><p>And yet, for all its successes, deep learning still remains a black box. 
The optimisation processes that guide these models to convergence operate in high-dimensional, non-convex spaces that should, by all rights, be fraught with local minima and optimisation challenges. And yet, through techniques like <em>stochastic gradient descent</em> and <em>adaptive learning rates,</em> these models consistently find good solutions, often outperforming human-engineered algorithms.</p><p>So I wrote this article listing five reasons why deep learning often feels like magic to the scientists who design these models and the engineers who deploy them. Finally, I wrote some short notes sharing my thoughts on why we still grapple with the fundamental concepts of deep learning, even though the field has advanced so much. Let&#8217;s get started.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!I2zx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc0557185-be4f-427d-b4b5-97eff5adfe06_1000x1000.png" width="1000" height="1000" alt=""><figcaption class="image-caption"><em>Artwork - The Magic of Deep Learning</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" 
href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h1>1. Complexity &amp; Non-Linearity</h1><p>At their core, neural networks are composed of layers of nodes, or neurons, each connected to others in subsequent layers. Each neuron performs a basic mathematical operation, taking weighted inputs, applying an activation function, and passing the result to the next layer.&nbsp;</p><p>The architecture of a neural network typically includes an <em>input layer,</em> <em>multiple hidden layers,</em> and an <em>output layer.</em> The term "deep" in deep learning refers to the number of hidden layers. While early neural networks might have had just one or two hidden layers, modern deep learning models can have dozens or even hundreds. This depth allows the network to model extremely complex functions.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!QjYp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26ee9e50-1b19-4e6e-b255-d437e18ea368_1000x1000.png" width="1000" height="1000" alt=""><figcaption class="image-caption"><em>Artwork - The Blueprint of a Deep Neural Network</em></figcaption></figure></div><p>Linear models are limited in their capacity to capture complex relationships because they can only represent data through straight-line functions. 
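</p><p>To make that limitation concrete, here is a minimal numpy sketch (illustrative only, not from the original post): two stacked linear layers collapse into a single linear map, while a ReLU placed between them does not.</p>

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

x = np.array([1.0, -2.0])
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0]])   # first "layer"
W2 = np.array([[1.0, 1.0]])   # second "layer"

# Without an activation, two linear layers collapse into one linear map.
linear_stack = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(linear_stack, collapsed)      # both give [-1.]

# A ReLU between the layers breaks the collapse: the negative component
# is clipped, so the stack is no longer expressible as one linear map.
nonlinear_stack = W2 @ relu(W1 @ x)              # gives [1.]
assert not np.allclose(nonlinear_stack, collapsed)
```

<p>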
Non-linear activation functions, such as the <em>Rectified Linear Unit (ReLU),</em> <em>sigmoid,</em> and <em>tanh,</em> supply the necessary complexity. Applied after each neuron's weighted sum, these functions enable the network to model <em>non-linear relationships,</em> and stacking multiple layers lets it capture ever more complex patterns.</p><p>A neural network trained to recognise objects in images will have its initial layers detect simple features like edges and textures. As data passes through subsequent layers, the network combines these simple features into more complex shapes and patterns, eventually recognising entire objects. This hierarchical feature learning is a hallmark of deep learning and is made possible by the non-linear transformations at each layer.</p><p>Training a deep neural network involves optimising a high-dimensional, non-convex loss function. Think of it as finding the lowest point in a landscape with many hills and valleys, where each dimension represents a weight in the network. The loss function&#8217;s landscape is riddled with local minima, points that are lower than their immediate surroundings but not the lowest overall. This complexity makes finding the global minimum, or even a sufficiently good local minimum, a daunting task.</p><p>But the surprising success of deep learning in overcoming these challenges is due in large part to advancements in optimisation techniques. <em>Stochastic Gradient Descent (SGD)</em> and its variants, such as <em>Adam</em> and <em>RMSprop,</em> have become the go-to methods for training deep networks. These algorithms iteratively adjust the weights of the network in small steps, guided by the gradient of the loss function. 
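</p><p>As a toy illustration of the recipe (a hedged sketch, not the article's own code), plain SGD on a small least-squares problem recovers the true weights from nothing but noisy mini-batch gradient steps:</p>

```python
import numpy as np

# Illustrative sketch: plain SGD on least squares, where y = X @ w_true.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true

w = np.zeros(3)
lr, batch = 0.05, 16
for step in range(500):
    idx = rng.integers(0, len(X), size=batch)   # random mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 / batch * Xb.T @ (Xb @ w - yb)   # gradient of mean squared error
    w -= lr * grad                              # small step against the gradient

assert np.allclose(w, w_true, atol=1e-2)        # SGD has recovered w_true
```

<p>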
The stochastic nature of SGD, which uses random subsets of data for each update, helps the optimiser escape local minima and explore the loss landscape more effectively.</p><p>Poorly chosen initial weights can lead to vanishing or exploding gradients, where the gradients used to update weights become too small or too large, impeding learning. Techniques like <em>Xavier</em> and <em>He</em> initialisation have been developed to address this issue, setting initial weights in a way that maintains stable gradients throughout training.</p><p>The introduction of <em>regularisation techniques,</em> such as <em>dropout</em> and <em>batch normalisation,</em> has also been central to the success of deep learning. <em>Dropout</em> randomly sets a fraction of the neurons to zero during training, preventing the network from becoming too reliant on any single neuron. <em>Batch normalisation</em> normalises the inputs of each layer, mitigating the problem of internal covariate shift and accelerating training.</p><p>The development of advanced optimisation techniques, weight initialisation strategies, and regularisation methods has enabled these models to achieve remarkable performance, defying the expectations of many in the field.&nbsp;</p><h1>2. Emergent Properties in Deep Neural Networks</h1><p>Okay, we are getting deep into the magic bit now, i.e., the emergence of complex properties and capabilities that were not explicitly programmed into the network. These emergent properties arise naturally from the training process, allowing deep neural networks to learn hierarchical representations and abstract concepts that surpass traditional machine learning approaches. Ever wonder why ChatGPT can sometimes spit out a perspective that hits deep and makes you question the worth of humanity? 
This is how it does that.</p><p>One of the most compelling examples of emergent properties in deep learning is <em>hierarchical feature learning,</em> particularly evident in <em>Convolutional Neural Networks (CNNs)</em> used for image recognition. At the outset, I was intrigued by how CNNs could transform raw pixel data into meaningful insights. The initial layers of a CNN detect simple features such as edges, corners, and textures. These low-level features are then combined in subsequent layers to form more complex patterns, like shapes and parts of objects. By the final layers, the network can recognise entire objects, such as faces, animals, or vehicles, with remarkable accuracy. This hierarchical approach mirrors the visual processing in the human brain, where simple features are progressively integrated into complex perceptions.</p><p>This ability to learn hierarchical features is not limited to image data. <em>Recurrent Neural Networks (RNNs)</em> and their advanced variants, like <em>Long Short-Term Memory (LSTM)</em> networks and <em>Gated Recurrent Units (GRUs),</em> demonstrate emergent properties in sequence processing tasks. When working with sequential data, such as natural language or time series, RNNs can capture dependencies and patterns across different time steps. LSTMs and GRUs address the vanishing gradient problem inherent in traditional RNNs, allowing the network to retain information over long sequences. This capability enables RNNs to understand context and generate coherent text, a line of work that paved the way for language models like GPT-4.</p><p>This is where the &#8216;creativity&#8217; of generative models originates: the so-called <em>&#8216;emergent behaviour&#8217;,</em> i.e., the ability of deep learning models to generate entirely new content. Generative Adversarial Networks (GANs), introduced by Ian Goodfellow and his colleagues, exemplify this. 
GANs consist of two networks, a <em>generator</em> and a <em>discriminator,</em> that are trained simultaneously. The generator creates synthetic data, while the discriminator evaluates its authenticity. Through this adversarial process, GANs can produce highly realistic images, music, and even video.</p><p>The emergence of these properties feels <em>&#8216;magical&#8217;</em> because the underlying principles of neural networks are relatively simple. The power of deep learning lies in the interactions between large numbers of neurons and the data-driven learning process. As networks grow deeper and more complex, they begin to exhibit behaviours that are not explicitly encoded in their design.&nbsp;</p><p>When CNNs are trained on vast and diverse datasets, they can generalise to new images with different styles, backgrounds, and distortions. This robustness is partly due to the extensive data augmentation techniques used during training, where images are randomly transformed to mimic real-world variations. This ability to generalise to new data is a key factor in the success of deep learning models in practical applications.</p><p>However, the same source that gives rise to &#8216;creativity&#8217; also gives rise to problems. The <em>hallucination</em> that occurs in modern-day generative models is a result of this very emergent behaviour. Understanding and interpreting the internal workings of deep neural networks remains difficult, often likened to peering into a <em>"black box."</em> While techniques like <em>feature visualisation</em> and <em>activation maximisation</em> provide some insights, they are limited in their ability to fully explain the network's behaviour. This lack of interpretability raises concerns, particularly in applications where transparency and trust are critical.</p><h1>3. 
Generalisation</h1><p>One of the most astonishing aspects of deep learning is its remarkable ability to generalise from <em>training data</em> to <em>unseen data.</em> This capability defies traditional expectations in machine learning, where models often struggle with <em>overfitting,</em> performing well on training data but failing to <em>generalise</em> to new, unseen examples. In deep learning, even with models containing millions or billions of parameters, we observe a surprising resilience against <em>overfitting,</em> achieving impressive performance across a wide range of tasks.</p><p>To understand this phenomenon, it helps to define generalisation and overfitting precisely. Generalisation refers to a model&#8217;s ability to perform well on new, unseen data that was not part of the training set. Overfitting occurs when a model learns not only the underlying patterns in the training data but also the noise and specificities that do not generalise. 
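</p><p>A classic way to see overfitting in miniature (an illustrative sketch, not from the original article) is to fit polynomials of increasing degree to a few noisy points: extra capacity drives the training error towards zero while the error on held-out points grows.</p>

```python
import numpy as np

# Ten noisy samples of a smooth function; 50 held-out test points.
rng = np.random.default_rng(1)
x_train = np.linspace(-1, 1, 10)
x_test = np.linspace(-1, 1, 50)
y_train = np.sin(3 * x_train) + rng.normal(scale=0.3, size=x_train.size)
y_test = np.sin(3 * x_test)                     # noise-free ground truth

def poly_mse(degree):
    """Fit a polynomial of given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_err, test_err

tr3, te3 = poly_mse(3)   # modest capacity
tr9, te9 = poly_mse(9)   # enough capacity to interpolate all 10 points
assert tr9 < tr3          # higher capacity drives training error down...
assert te9 > te3          # ...while held-out error gets worse: overfitting
```

<p>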
Traditional machine learning models, especially those with high capacity, tend to overfit when trained on small datasets.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!MAW5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cabb5e4-e757-4145-adca-b4e6e6a97728_1000x1000.png" width="1000" height="1000" alt=""><figcaption class="image-caption"><em>Artwork - Generalisation Across Data</em></figcaption></figure></div><p>Deep learning models, however, often manage to generalise well even when trained on datasets that are small relative to their capacity. Several factors contribute to this counterintuitive behaviour. Firstly, the sheer size and depth of these models allow them to learn a wide variety of features at different levels of abstraction. This hierarchical learning enables the models to capture both low-level and high-level patterns, making them more robust to variations in the data.</p><p>Data augmentation plays a critical role in improving generalisation. By artificially expanding the training dataset through transformations such as rotations, translations, and scaling, data augmentation exposes the model to a wider variety of examples. This practice prevents the model from memorising specific instances and encourages it to learn more generalisable features. 
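</p><p>A minimal sketch of the idea (illustrative only): even the simplest label-preserving transformations turn one training image into several distinct views of the same class.</p>

```python
import numpy as np

# One random 8x8 array stands in for a training image.
rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(8, 8))

def augment(img):
    """Return label-preserving views: flips and 90-degree rotations."""
    views = [img, np.fliplr(img), np.flipud(img)]
    views += [np.rot90(img, k) for k in (1, 2, 3)]
    return views

views = augment(image)
assert len(views) == 6          # one image -> six training views
# Every view contains exactly the same pixel values, just rearranged,
# so the label is preserved while the spatial arrangement varies.
assert all(np.array_equal(np.sort(v, axis=None), np.sort(image, axis=None))
           for v in views)
```

<p>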
In image recognition tasks, for instance, augmented datasets ensure that the model can recognise objects regardless of variations in viewpoint, lighting, or background.</p><p>When trained on vast amounts of data, these models encounter a wide range of examples and variations, enabling them to learn more robust and generalisable representations. This is particularly evident in models pre-trained on large-scale datasets such as ImageNet for visual tasks or extensive text corpora for language models. The knowledge gained from these large datasets can be transferred to new tasks through fine-tuning, leveraging the generalisation capability acquired during pre-training.</p><p>Now, all of these are &#8216;reasons&#8217; why deep learning models adapt so well to new data. Don&#8217;t confuse them with &#8216;explanations&#8217; as to why deep learning is so good at generalisation. Despite these advancements, the theoretical understanding of why deep learning models generalise so well remains incomplete. Traditional statistical learning theory suggests that models with a high capacity <em>should overfit</em> unless strong regularisation is applied. However, deep learning models often violate these expectations, achieving low training error and low test error simultaneously.&nbsp;</p><p>This paradox has led to the exploration of new theoretical frameworks and hypotheses. One such hypothesis is the concept of <em>over-parameterisation.</em> In deep learning, models are often significantly over-parameterised, meaning they have far more parameters than necessary to fit the training data. Surprisingly, this over-parameterisation does not necessarily lead to overfitting. Instead, it appears to enable the model to find simpler, more generalisable solutions in the high-dimensional parameter space. 
This aligns with the empirical observation that larger models often perform better, even when the risk of overfitting should be higher.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!mf_9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F009d4fb8-7ff8-488b-996a-1d89d19c7927_1000x1000.png" width="1000" height="1000" alt=""><figcaption class="image-caption"><em>Artwork - Over-Parameterisation</em></figcaption></figure></div><p>The <em>lottery ticket hypothesis</em> is another intriguing explanation. Proposed by Jonathan Frankle and Michael Carbin, this hypothesis suggests that within a large, <em>over-parameterised</em> model, there exists a smaller sub-network, i.e., a <em>"winning ticket",</em> that, when trained in isolation, can achieve similar performance to the original model. 
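</p><p>The basic operation behind such pruning experiments is magnitude pruning: zero out the smallest weights and keep a binary mask of the survivors. A hedged numpy sketch (illustrative, not Frankle and Carbin's actual code):</p>

```python
import numpy as np

# A random weight matrix stands in for one trained layer.
rng = np.random.default_rng(3)
weights = rng.normal(size=(64, 64))

def magnitude_prune(w, sparsity):
    """Zero the fraction `sparsity` of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold        # surviving sub-network ("ticket")
    return w * mask, mask

pruned, mask = magnitude_prune(weights, sparsity=0.8)
assert pruned[~mask].sum() == 0.0        # pruned weights are exactly zero
assert 0.15 < mask.mean() < 0.25         # roughly 20% of weights survive
# In a lottery-ticket experiment, the surviving sub-network would then be
# rewound to its original initial values and retrained in isolation.
```

<p>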
Finding these winning tickets through pruning and retraining can lead to more efficient models with strong generalisation capabilities.</p><h1>4. Transfer Learning and Fine Tuning</h1><p>Transfer learning is exactly what it sounds like. This approach leverages the knowledge gained from training on one task to improve performance on a related but different task. Transfer learning has significantly accelerated progress in various domains, allowing researchers and practitioners to build powerful models with relatively small amounts of task-specific data. When I first encountered transfer learning, I was amazed by its simplicity and effectiveness, which seemed almost counterintuitive given the traditional machine learning emphasis on <em>task-specific training.</em></p><p>Transfer learning typically involves two main steps: <em>pre-training</em> and <em>fine-tuning.</em> During <em>pre-training,</em> a deep learning model is trained on a large dataset, often encompassing a wide range of categories and examples. This extensive training allows the model to learn a rich set of features and representations that capture the underlying structure of the data. The pre-trained model can then be <em>fine-tuned</em> for a specific task by training it on a smaller, task-specific dataset. Fine-tuning adjusts the pre-trained model&#8217;s parameters to better suit the new task while retaining the general knowledge acquired during pre-training.</p><p>A prominent example of transfer learning in action is the use of convolutional neural networks (CNNs) pre-trained on the ImageNet dataset. ImageNet contains millions of labelled images across thousands of categories, providing a comprehensive training ground for visual feature extraction. Models like VGG, ResNet, and Inception have been pre-trained on ImageNet and are widely used as starting points for various computer vision tasks. 
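</p><p>In the simplest setting, the pre-trained network is used as a frozen feature extractor and only a small classifier head is trained on the new task. A minimal sketch with random stand-in features (the data, dimensions, and training loop here are purely illustrative):</p>

```python
import numpy as np

# Stand-in for features produced by a frozen, pre-trained backbone.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
labels = (feats[:, 0] > 0).astype(float)          # toy binary target

# Train only a logistic-regression "head" on top of the fixed features.
w = np.zeros(64)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-feats @ w))          # predicted probabilities
    w -= 0.1 * feats.T @ (p - labels) / len(labels)  # gradient step on the head

train_acc = float(np.mean((feats @ w > 0) == (labels > 0.5)))
```

<p>Only the head&#8217;s weights are updated; the backbone&#8217;s parameters never change, which is why this approach remains practical even with very small task-specific datasets.</p><p>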
By fine-tuning these pre-trained models on smaller datasets, such as medical images or specific object detection tasks, researchers can achieve high performance with significantly less data and computational resources. Big tech companies use this technique with their models all the time. Transfer learning has revolutionised the field with models like <em>Gemini</em> and <em>GPT</em> (Generative Pre-trained Transformer).</p><p>GPT, particularly GPT-4, takes transfer learning to another level. GPT-4 is pre-trained on a diverse and massive text dataset, allowing it to generate coherent and contextually appropriate text across various tasks without task-specific fine-tuning.&nbsp;</p><p>Other forms of transfer learning include <em>feature extraction</em> and <em>domain adaptation.</em> <em>Feature extraction</em> involves using the pre-trained model as a fixed feature extractor, where the model&#8217;s learned representations are fed into a simpler classifier for the target task. This method is useful when computational resources are limited or when the target task has a very small dataset. <em>Domain adaptation,</em> on the other hand, focuses on adapting a model trained on one domain to perform well on a related but different domain. Techniques such as <em>adversarial training</em> and <em>domain adversarial neural networks (DANN) </em>are used to bridge the gap between source and target domains.</p><h1>5. Optimisation and Convergence in High Dimensional Spaces</h1><p>Saving the best for last, I would like to talk about one of the most perplexing and fascinating aspects of deep learning, i.e., its ability to converge to effective solutions despite the immense complexity of its optimisation landscapes. When I first studied the optimisation process in deep neural networks 4 years ago, I was struck by the sheer scale and difficulty of the task.
These models often contain millions or even billions of parameters, and their loss functions are highly non-convex, filled with numerous local minima, saddle points, and flat regions. Yet, deep learning models not only manage to find good solutions but often do so efficiently and effectively. As a fresher in my master&#8217;s course in AI, I was always baffled by this, and it triggered long discussions with my AI Lab professor about the phenomenon.</p><p>The optimisation challenge in deep learning arises from the high-dimensional nature of neural networks. Each weight in the network represents a dimension in the parameter space, creating an optimisation problem with thousands or millions of dimensions. The loss function, which measures the discrepancy between the model&#8217;s predictions and the actual data, forms a complex landscape in this high-dimensional space. Traditional optimisation problems, especially those with convex loss functions, are relatively straightforward to solve, as any local minimum is guaranteed to be a global minimum. In contrast, the non-convex nature of neural network loss functions means there are multiple local minima and saddle points, making the optimisation process much more challenging.</p><p>Despite these challenges, deep learning models have consistently demonstrated an ability to find effective solutions.&nbsp;</p><p>Another critical aspect of effective optimisation in deep learning is the initialisation of network weights. Proper initialisation is essential to ensure that gradients do not vanish or explode as they propagate through the network. The <em>vanishing gradient problem</em> occurs when gradients become exceedingly small, leading to negligible updates to the weights and slow learning. Conversely, the <em>exploding gradient problem</em> happens when gradients grow exponentially, causing unstable updates and diverging weights.
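</p><p>Two standard remedies scale the random initial weights by the layer&#8217;s size so that signal variance stays roughly constant from layer to layer. A minimal numpy sketch (the helper names are mine, not from any particular library):</p>

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    # Xavier/Glorot: variance 2 / (fan_in + fan_out), suited to tanh-like units
    return rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)), size=(fan_in, fan_out))

def he_init(fan_in, fan_out, rng):
    # He: variance 2 / fan_in, compensating for ReLU zeroing half its inputs
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
w = he_init(512, 256, rng)   # empirical std should be close to sqrt(2/512)
```

<p>With either scheme, activations keep a stable scale through the depth of the network, which keeps early-training gradients well behaved.</p><p>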
Techniques such as <em>Xavier (Glorot) initialisation</em> and <em>He initialisation</em> address these issues by setting the initial weights based on the number of input and output units in the network, maintaining a stable gradient flow during training.</p><p>While these optimisation techniques have propelled the success of deep learning, the theoretical understanding of why they work so well is still evolving. Traditional optimisation theory suggests that finding the global minimum in high-dimensional, non-convex landscapes should be exceedingly difficult. However, empirical evidence shows that deep learning models often find solutions that generalise well to new data, even if they are not the global minimum.&nbsp;</p><p>Why this disparity? One hypothesis is that the <em>loss landscape</em> of deep neural networks contains a large number of flat or nearly flat regions, known as <em>"good" local minima, </em>that generalise well. These regions may be connected by paths of low loss, allowing the optimisation process to move between them relatively easily. This view aligns with the empirical success of SGD and its variants, which seem to navigate these landscapes effectively.</p><p>Techniques such as <em>stochastic gradient descent, adaptive optimisers</em> like <em>Adam, proper weight initialisation,</em> and <em>batch normalisation</em> have played important roles. While our theoretical understanding continues to develop, the empirical effectiveness of these methods highlights the &#8216;magic&#8217; of deep learning.&nbsp;</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-deep-learning-is-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading? 
Don&#8217;t forget to share!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-deep-learning-is-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/why-deep-learning-is-magic?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h1>Addressing Our Gap in the Theoretical Understanding of Deep Learning</h1><p>At the end of this article, I would like to share a few short notes on why deep learning still feels like magic to many. If you have made it this far, you are a true lover of the field and I applaud your interest. Perhaps you can help bridge the gap between the theory and the outcomes.&nbsp;</p><p>Simply put, the rapid advancements in deep learning have <em>outpaced</em> our theoretical understanding of why these models work so well.&nbsp;</p><p>As I explored this field, I realised that while we have developed highly effective methods and models, our understanding of the underlying principles remains incomplete. This is similar to our conundrum with Quantum Computing. We understand enough of Quantum phenomena to build a computer based on them. But we lack an understanding of how the phenomena themselves work or why.&nbsp;</p><p>The <em>information bottleneck theory</em> is another framework that has been proposed to explain deep learning&#8217;s success. This theory, introduced by Naftali Tishby and colleagues, suggests that deep neural networks learn to compress input data into a compact representation that retains only the most relevant information for the task at hand. During training, the network undergoes a process of initial fitting, where it captures both relevant and irrelevant information, followed by a compression phase, where it discards the irrelevant information.
The resulting compressed representation is more robust and generalises better to new data. This perspective provides a potential explanation for the stages observed in the training dynamics of deep networks.</p><p>Despite this hypothesis, significant gaps remain in our theoretical understanding. For instance, the exact mechanisms by which deep networks navigate high-dimensional, non-convex loss landscapes and consistently find good solutions are not fully understood. The role of various architectural choices, such as depth, width, and activation functions, in shaping the loss landscape and influencing generalisation is an ongoing area of research. I hate to say this, but when it comes to these features, scientists are mostly still &#8216;winging it&#8217;.&nbsp;</p><p>Recent advancements in theoretical research have started to bridge some of these gaps. For example, the <em>neural tangent kernel (NTK)</em> theory provides insights into the training dynamics of infinitely wide neural networks. According to NTK theory, as the width of a neural network increases, its behaviour during training becomes more predictable and linear, resembling kernel methods. This theory helps explain why very wide networks are easier to optimise and often generalise well, as their training dynamics become more stable and tractable.</p><p>Another promising direction is the <em>mean-field theory</em> of neural networks, which studies the behaviour of networks in the limit of large width. Mean-field theory models the collective behaviour of neurons in a layer, providing a statistical description of the network&#8217;s dynamics. This approach has yielded insights into the convergence properties of deep networks and the impact of architectural choices on their performance.</p><p>While deep learning&#8217;s empirical success continues to amaze and inspire, our theoretical understanding is still catching up.
The ongoing research into the theoretical foundations of deep learning promises to deepen our understanding. However, as of yet, much of deep learning remains &#8216;magical&#8217;.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1ftlgyuM4Sfo6F87CWdi4vYfw8hPImERw/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download the Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1ftlgyuM4Sfo6F87CWdi4vYfw8hPImERw/view?usp=sharing"><span>Download the Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes!
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/why-deep-learning-is-magic/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/why-deep-learning-is-magic/comments"><span>Leave a comment</span></a></p><h2></h2>]]></content:encoded></item><item><title><![CDATA[A Message of Hope and Caution for the Ongoing Tech Turmoil]]></title><description><![CDATA[Putting into Perspective the role of an Engineer in a Tech Company and the Importance of Treating Your Employees Better]]></description><link>https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 08 Jul 2024 10:54:03 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OrMc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Readers!</p><p>For my comeback article after a brief vacation that I took in June, I had planned for some hardcore engineering article like a blueprint for inhabiting Mars or a solution for cleaning space garbage (he said humbly). 
But it turns out there is a more pressing concern that needs to be addressed back here on earth first - the dire state of employees in the tech space at large, but more specifically, engineers.&nbsp;</p><p>As a software engineer, witnessing the turbulent shifts within our industry, especially here in Silicon Valley, has been disheartening. Job cuts, reduced amenities, and decreased pay have become alarmingly common. According to recent data, thousands of software engineering positions have been eliminated across major tech companies. Big Tech firms are often the ones making the headlines. But the turmoil is much bigger than just Big Tech. Everything that is &#8216;Tech&#8217; is currently impacted in one way or another.&nbsp;</p><p>What makes it worse is that these changes are happening at a time when the cost of living remains incredibly high, exacerbating the stress and uncertainty many of us feel.</p><p>Adding to this anxiety is the persistent narrative that AI/AGI will replace software engineers, rendering our skills obsolete.&nbsp;</p><p>With all this going on, I found it imperative to share some brief notes in the hope of helping engineers and tech employees persevere through this turmoil, and to use this opportunity to share a word of caution with my fellow founders and CEOs of tech companies across the world.&nbsp;</p><p>Don&#8217;t worry, this will be a quick read.
Just 5 notes that put into perspective what a software engineer is, why they cannot be easily replaced, and why the industry leaders need to abandon counterintuitive business practices and support their employees.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OrMc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OrMc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OrMc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OrMc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!OrMc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F70dd823e-5b89-4cc5-9d15-c651d86f789d_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Artwork - The Human/AI Synergy</em></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h1>1. Engineering is More than Just Coding</h1><p>What&#8217;s an engineer at a tech company? A coder. Wrong. Engineering is so much more than just writing lines of code. As engineers, we are architects of complex systems, problem solvers, and strategic thinkers. Our role involves designing and maintaining the intricate frameworks that power today&#8217;s digital world.</p><p>One of the most crucial aspects of our work is system architecture. This involves creating the blueprint for a system, ensuring that all components function seamlessly together. 
System architects need a deep understanding of both software and hardware, as well as the ability to foresee potential challenges and plan accordingly. This skill set is something that AI, in its current state, cannot replicate. Designing systems requires a level of creativity, foresight, and holistic understanding that goes beyond algorithmic processing.</p><p>Engineering involves continuous problem-solving. Whether it&#8217;s debugging a piece of code, optimising performance, or integrating new technologies into existing systems, engineers are constantly faced with challenges that require innovative solutions. This problem-solving aspect is where human ingenuity shines. While AI can assist by suggesting solutions or identifying issues, the nuanced decision-making and contextual understanding required to implement these solutions effectively are distinctly human capabilities.</p><p>Engineers play a vital role in strategic planning and project management. We are often responsible for guiding projects from conception through to completion, which involves coordinating teams, managing resources, and ensuring that all elements of a project align with the overall goals. This leadership aspect is another area where human skills are irreplaceable. Effective communication, empathy, and the ability to motivate and manage a team are essential components of successful project management that AI lacks.</p><p>Can AI do all of this today? Impossible. Can AI do all of this in 10 years? My take: not by a long shot. Perhaps 2034 will prove me wrong. I just don&#8217;t see how. I do concede that the recent layoffs and changes in the tech industry highlight a pressing need for engineers who can wear multiple hats, i.e., those who can code, but also design, strategise, and lead. Sure, we need more skills as the industry evolves. But that does not mean engineers are replaceable, even with the rise of AI.</p><h1>2.
Even if Engineering is Just Coding, AI Still Cannot Replace Engineers</h1><p>Even if we were to narrow our perspective and consider software engineering as merely coding, the notion that AI could replace engineers is still far from reality. AI has made significant strides in recent years, particularly in automating certain coding tasks and improving efficiency. However, its limitations in addressing real-life coding problems are evident and substantial.</p><p>Recent research from <a href="https://aiindex.stanford.edu/report/">Stanford University</a> indicates that while AI can assist with code generation and bug detection, it struggles with understanding context and integrating solutions into complex, real-world systems.&nbsp;</p><p>Moreover, AI's inability to fully comprehend and solve complex coding issues is evident in its performance on competitive programming platforms. While AI models can solve basic coding challenges, their performance drops significantly on more complex problems that require deeper logical reasoning and understanding of intricate dependencies.&nbsp;</p><p>Another limitation of AI in real-life coding is its dependency on high-quality training data. AI models learn from existing codebases, which means they can inadvertently propagate mistakes or suboptimal practices found in the data. Unlike human engineers, who can critically analyse and improve upon existing solutions, AI lacks the judgement to discern between good and bad practices.&nbsp;</p><p>Real-world software development involves continuous adaptation to changing requirements and unforeseen challenges. Human engineers excel in environments that require flexibility and creative problem-solving. For instance, during the development of a new feature, engineers must often interact with stakeholders, gather feedback, and iterate on their designs.
These iterative processes demand a level of human intuition and adaptability that AI does not possess.</p><p>AI has undoubtedly enhanced certain aspects of coding, but its limitations in solving real-life coding problems are significant. The context, creativity, and collaborative skills that human engineers bring to the table are too substantial and nuanced to be replaced by AGI (whatever that means) in 10 years or more. Some might argue that it's only a matter of time. I say, it's a matter of true understanding and sentience, both of which require a different approach than the one that current AI models are taking.</p><h1>3. Turmoil is Temporary</h1><p>The disruptions we are experiencing in the tech industry today, including layoffs and economic uncertainties, are undeniably challenging. However, we cannot forget that they are temporary all the same.&nbsp;</p><p>AI and AGI will follow a similar trajectory to past technological shifts. While these technologies will automate certain repetitive and mundane tasks, they will also enhance our capabilities, allowing us to focus on more complex and creative aspects of our work. This shift will create new roles and specialties within the engineering field. For instance, as AI systems become more integrated into business operations, there will be a growing demand for engineers who can design, implement, and maintain these systems. Additionally, new fields such as AI ethics, AI safety, and human-AI interaction are emerging. As AI replaces the old work, there is so much potential for new work to not only be found but also created.&nbsp;</p><p>A study by the <a href="https://edisonandblack.com/pages/over-97-million-jobs-set-to-be-created-by-ai.html">World Economic Forum</a> predicts that AI and automation could create 97 million new jobs by 2025, outpacing the number of jobs displaced by these technologies.
These new roles will span various sectors, including data analysis, AI development, cybersecurity, and digital marketing, among others.&nbsp;</p><p>The integration of AI into various industries will lead to the development of new products and services, driving economic growth and innovation. Engineers who can leverage AI to solve real-world problems will be at the forefront of this transformation. AI-powered tools are already being used to enhance healthcare delivery, optimise supply chains, and improve environmental sustainability.&nbsp;</p><p>So what does all this mean? It means that the current turmoil in the tech industry is a temporary phase. AI/AGI will ultimately enhance our capabilities, create new job opportunities, and drive economic growth. By focusing on continuous learning and adaptability, we can navigate these changes successfully and emerge stronger and more innovative.&nbsp;</p><h1>4. This is the Age of Problem Solvers</h1><p>We are currently living in an era that values problem solvers more than ever before. With the power of AI on our side, each one of us now has the ability to become our own Nikola Tesla or Steve Jobs. This means we need to start thinking beyond traditional career paths and the allure of working for Big Tech.&nbsp;</p><p>The traditional path of securing a job at a major tech company is no longer the only or the most desirable route for many engineers. These companies often come with limitations on creativity and the scope of impact. Engineers today have the opportunity to leverage their skills to solve significant global issues, from climate change and healthcare to education and social inequality.&nbsp;</p><p>AI can be used to analyse vast amounts of data to identify patterns and insights that were previously inaccessible.
This capability can be applied to a wide range of fields, such as predicting climate patterns, optimising resource allocation in agriculture, and improving disease diagnosis and treatment in healthcare.</p><p>Startups and smaller companies are often at the forefront of this kind of innovation. They are more agile and willing to take risks, providing engineers with the freedom to experiment and develop groundbreaking solutions. By joining or founding startups focused on solving real-world problems, engineers can have a direct and tangible impact.&nbsp;</p><p>An excellent example of engineers making a difference is the rise of social impact tech startups. These companies aim to address societal challenges using technology. Engineers working in these startups are developing solutions for clean energy, affordable healthcare, and accessible education, among other areas. Their work demonstrates that engineers can be more impactful by aligning their skills with global needs rather than just pursuing traditional corporate roles.</p><p>Engineers who can think critically and adapt to new technologies will always be in demand. The ability to learn and apply new tools, such as machine learning algorithms, blockchain technology, or quantum computing, can set you apart in a competitive job market.&nbsp;</p><p>To truly embrace the age of problem solvers, we must also cultivate a mindset of continuous learning and curiosity. Engineers who do so can drive meaningful change and achieve success beyond traditional career paths. The opportunities are vast and varied, and those who embrace this mindset will find themselves at the forefront of innovation and impact. Now is the time to think big, be bold, and use our engineering skills to make a difference in the world.</p><h1>5.
A Word of Caution for Big Tech &amp; Founders of Tech Companies</h1><p>Last but not least, I would like to share a word of caution with the industry leaders and founders &amp; CEOs of tech companies across the world.&nbsp;</p><p>The rapid advancements in AI and technology have been made possible by the relentless efforts and innovative spirit of your employees. They are the backbone of your organisations, driving the development of cutting-edge technologies and maintaining the vast infrastructure that powers your services. However, the recent trends of job cuts, reduced amenities, and decreased pay are alarming and counterproductive.</p><p>Your employees are not just cogs in a machine. They are the lifeblood of your company. Disempowering them through layoffs and diminishing work conditions not only undermines their morale but also jeopardises the future of your organisation. The talent and dedication of your engineers are what fuel the innovation and competitive edge that tech companies are known for.</p><p>It's essential to recognise that investing in your employees is investing in your company's future. Happy and motivated employees are more productive, creative, and loyal. They are the ones who will develop the next groundbreaking technologies and ensure that your company remains at the forefront of the industry. Providing them with the resources, support, and respect they deserve is not just good ethics, it's sound business strategy.</p><p>The narrative that AI will replace engineers and other employees is not only misleading but also detrimental to your workforce's confidence. Instead of fostering fear and uncertainty, focus on how AI can augment your employees' capabilities. Create an environment where AI and human ingenuity can coexist and thrive together.
Encourage your employees to explore new ways of integrating AI into their workflows, enhancing their productivity and innovation potential.</p><p>Maintaining a healthy work-life balance is crucial for sustaining long-term productivity and creativity. Overworking your employees and reducing their benefits can lead to burnout, high turnover rates, and a decline in the quality of work.&nbsp;</p><p>A culture of continuous learning and development is vital. Encourage your employees to pursue further education, attend conferences, and stay updated with the latest advancements in technology. Providing opportunities for professional growth will not only benefit your employees but also bring fresh ideas and innovations to your company.</p><p>The key to sustaining the success and innovation of tech companies lies in treating your engineers and employees with the respect and support they deserve. They are the ones who drive your company's technological advancements and ensure its competitive edge. By investing in their well-being, creating a positive work environment, and embracing the synergy between AI and human creativity, you can build a stronger, more resilient organisation. 
The future of your company depends on the talent and dedication of your engineers, so empower them, and they will lead you to new heights.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1wPJCMTyfsS9ECAJxZdlNyneNNx6mAlS-/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download the Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1wPJCMTyfsS9ECAJxZdlNyneNNx6mAlS-/view?usp=sharing"><span>Download the Article as PDF</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p class="button-wrapper"
data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/a-message-of-hope-and-caution-for/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[A Primer on Computational Pragmatics]]></title><description><![CDATA[Discussing Context Models, Dialogue Systems, Speech Acts, Implicature, Inference, Reference Resolution, Coreference, & Discourse Analysis]]></description><link>https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 27 May 2024 00:30:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hx9K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds, Engineers, and Engineering Nerds!</p><p>I am happy to bring forth another treat for you this week. 
It&#8217;s <strong><a href="https://drive.google.com/file/d/1-9aEMnfaEgGbXYk2YYf-Qbm4wJvJiFKz/view?usp=sharing">A complete Primer on Computational Pragmatics</a>.</strong></p><p><strong>Pragmatics,</strong> a fundamental branch of <strong>linguistics,</strong> delves into <strong>the ways in which context contributes to meaning.</strong> Unlike semantics, which focuses on the inherent meaning of words and sentences, pragmatics considers <strong>how language is used in real-world situations.</strong> Understanding this distinction is crucial for anyone working with natural language processing (NLP) and artificial intelligence (AI).</p><p><strong>If you want to find out the next BIG frontier in AI Chatbots and Assistants, it's this!</strong> In this article, I aim to explore the fascinating field of computational pragmatics, where the principles of pragmatics intersect with AI. This area of study seeks to equip machines with the ability to understand and interpret human language as effectively as possible, accounting for the nuances of context that make human communication so rich and complex.</p><p>The purpose of this deep dive is to shed light on the key research areas within computational pragmatics, discussing both the theoretical underpinnings and the practical applications. From context modelling to dialogue systems and implicature, each topic will be examined to reveal how computational techniques are being leveraged to advance our understanding of language use. </p><p><strong>I promise you, this is going to be one of the most thorough primers you have ever read on the field.</strong> By the end of this article, you will understand the current state of computational pragmatics and also be inspired to further explore and innovate. Whether you are developing the next generation of conversational agents or simply curious about the intersection of language and technology, this primer will help you big time. </p><p>This one&#8217;s a 25-page mammoth! 
So I am sharing with you below, a link to <strong>download this article in PDF format absolutely for FREE.</strong> Enjoy, and do let me know what you think about it!</p><p>Happy Reading!</p><p><strong><a href="https://mirhsquadri.com/">Mir</a></strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1-9aEMnfaEgGbXYk2YYf-Qbm4wJvJiFKz/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download the Article as PDF for FREE&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1-9aEMnfaEgGbXYk2YYf-Qbm4wJvJiFKz/view?usp=sharing"><span>Download the Article as PDF for FREE</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hx9K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hx9K!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!hx9K!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!hx9K!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!hx9K!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hx9K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1825055,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hx9K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!hx9K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!hx9K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!hx9K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c305867-1058-4be7-9134-ff23d1a52f5e_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Artwork - The World of Computational Pragmatics</figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div
class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts and free course materials and resources.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading? Spread the love!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/a-primer-on-computational-pragmatics/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Summer Reads for Nerds, Engineers, & the Curious]]></title><description><![CDATA[Book Recommendations for Summer 2024 from 
AI & Blockchain to Mars and Tennis]]></description><link>https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 13 May 2024 00:30:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QqvS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Fellow Nerds,</p><p>As summer makes an entrance for many of you in the western hemisphere, here in India, we are reaching its conclusion (hopefully). The sun here is no joke. I spend most of my days dodging its relentless glare.</p><p>My schedule hasn't been any kinder either, suddenly packed with a slew of unexpected projects. Sure, it's been chaotic, but hey, chaos has its perks too! It&#8217;s pushed me back to my stacks of unread books, allowing me to sneak in some reading between the madness.</p><p>I&#8217;ve also been diving into learning Japanese and Spanish. Balancing verb conjugations with project deadlines is a juggling act, but it's thrilling in its own right. It reminds me of a Japanese saying, <em>"&#32153;&#32154;&#12399;&#21147;&#12394;&#12426;" (Keizoku wa chikara nari),</em> meaning <em>"Perseverance brings strength."</em> It&#8217;s a good reminder that keeping at it, whether it&#8217;s language learning or any other challenge, builds more than just skill; it builds character.</p><p>So with that in mind, I would like to share with you an eclectic collection of summer reads that you might enjoy. I have got everything packed in here, from interesting textbooks on Deep Learning and Blockchain to some really cool books on Mars and even Tennis (although that one&#8217;s more of a life guide than a Tennis guide). 
So let&#8217;s jump right in.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1VHsTGIXow2uNrErp8rxcBx8hEX5kBJSZ/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1VHsTGIXow2uNrErp8rxcBx8hEX5kBJSZ/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QqvS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QqvS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!QqvS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QqvS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!QqvS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1d3e7fa5-f074-4ab1-8ecc-d0f2c1b5cd30_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Artwork - Summer Reading</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h2>Deep Learning Foundations and Concepts by Christopher M. Bishop</h2><p>I had to start with this <a href="https://www.amazon.com/Deep-Learning-Foundations-Christopher-Bishop/dp/3031454677/">one</a>. I have been reading a bit of this every day for the past two months or so. It's most definitely one of my favourite deep learning books now. 
It's actually very difficult to write a good Deep Learning book! And I think Bishop did an excellent job of making a very convoluted topic (pun intended) far simpler and more comprehensible.</p><p>The book dives deep into various neural network architectures, which are the backbone of many modern AI applications. This includes the likes of <em>CNNs (Convolutional Neural Networks)</em> for image-related tasks and <em>RNNs (Recurrent Neural Networks)</em> for sequential data like time series or natural language. These architectures are important because they allow machines to automatically learn complex patterns from data, a fundamental capability for innovations in fields like autonomous driving and real-time language translation.</p><p>Bishop goes into detail on how to effectively train these models using algorithms such as <em>backpropagation</em> and various forms of <em>gradient descent.</em> This discussion is central because the efficiency and accuracy of learning directly influence the practical deployment of models in real-world applications, where computational resources and time are often limited.</p><p>Techniques such as <em>dropout</em> and <em>batch normalisation</em> are covered comprehensively. These are essential for building models that not only perform well on training data but also generalise to new, unseen data without overfitting.</p><p>The book also includes a self-contained introduction to probability theory, which is fundamental for understanding how to model uncertainty in predictions.</p><p>By integrating these topics through various perspectives, textual descriptions, diagrams, mathematical formulas, and pseudo-code, the book ensures that you're not just passively reading but actively engaging with the material. 
I cannot recommend this book enough!</p><h2>Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, and Thomas Wolf</h2><p>Transformers aren&#8217;t as easy to understand as they might initially seem. To understand transformer models, you need to have a good background not only in Neural Networks but also the recent history of AI development and the models that preceded the transformer model. That&#8217;s why I love this <a href="https://www.amazon.com/Natural-Language-Processing-Transformers-Revised/dp/1098136799/">book</a>. It starts at the very beginning, covering the evolution of natural language processing. Without this precursor, it is very difficult to understand <em>Self-Attention</em> mechanisms that are deployed in the latest models like GPT-4 or Gemini.</p><p>It breaks down the transformer architecture introduced by Vaswani et al., explaining self-attention mechanisms and how they differ fundamentally from previous sequence-to-sequence models.</p><p>The book details procedures for domain adaptation where a pre-trained model like BERT or GPT is fine-tuned for specific tasks, such as sentiment analysis or question answering, which are crucial for tailoring generic models to specialised tasks.</p><p>Techniques like <em>model distillation, pruning,</em> and <em>quantisation</em> are explored to optimise transformers for operational efficiency. 
These processes are essential for deploying robust NLP models in production environments where resource constraints are a consideration.</p><p>The authors also delve into advanced topics such as cross-lingual transfer learning and strategies for dealing with scenarios where labelled data is scarce.</p><p>Practical guidance is given on how to scale transformer training to multiple GPUs and manage distributed computing environments, making it a valuable resource for projects requiring large-scale computation.</p><p>It also addresses the integration of transformer models into applications, a process that involves fine-tuning, managing data loaders, and customising loss functions and optimisers, all of which are crucial for successful NLP application development.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Like what you&#8217;re reading? Don&#8217;t forget to share!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h2>Destination Mars by Andrew May</h2><p>Shifting gears a bit (or a lot), let&#8217;s talk about another recent curiosity of mine that I have been reading about - Mars. 
In this <a href="https://www.amazon.com/Destination-Mars-Story-Conquer-Planet/dp/1785782258/">book</a>, Andrew May details the technological and scientific challenges that humanity faces in its quest to reach Mars. This book is both a historical account of our fascination with the red planet and a roadmap for the formidable journey ahead.</p><p>May begins by outlining the immense distances and harsh conditions that define the Martian landscape, which set the stage for the engineering marvels required to make Mars accessible. He discusses the various spacecraft and rovers that have been part of Mars exploration, from <em>Viking</em> to <em>Curiosity,</em> and highlights how each mission has built upon the last in terms of technology and knowledge.</p><p>A significant portion of the book is dedicated to the propulsion technologies required for a manned mission to Mars. As an engineer, I appreciate May's in-depth analysis of chemical rockets, ion thrusters, and the potential of nuclear propulsion. The trade-offs between thrust and efficiency, and the implications for spacecraft design, are particularly fascinating.</p><p>May also addresses the life support systems essential for sustaining humans on Mars. He examines the current state of closed ecological systems, waste recycling, and the challenges of producing food in a non-Earth environment. This discussion is crucial, as it ties directly into the sustainability of long-term space habitation.</p><p>In the latter sections, May speculates on future settlements on Mars. He discusses habitat designs, from inflatable modules to 3D-printed structures using Martian regolith.</p><p>While the book doesn&#8217;t directly address this issue in great detail, I have been thinking deeply about gravity. Mars has only about 38% of Earth&#8217;s surface gravity. This is a significant hurdle. 
Elon Musk&#8217;s idea of nuking the poles to heat the surface and create an atmosphere is only feasible if gravity is strong enough to hold the atmosphere in place. There is no simple answer (or any answer as of yet) to the gravity question. I have been thinking of writing an article on this to explore this properly.</p><h2>Blockchain Technology: Algorithms and Applications by Asharaf S. et al.</h2><p>Have you ruled out Blockchain? If so, then I have two things to say to you. One, you are not alone. And two, you might be very wrong about it. I have my reasons for saying this that I intend to cover in a later article, but for now, I would recommend you continue reading up on blockchain. Don&#8217;t be deceived by the crypto crowd and their shenanigans, the underlying idea of blockchain is pretty sound and in my opinion, here to stay.&nbsp;</p><p>What I liked about this <a href="https://www.amazon.in/Blockchain-Technology-Algorithms-Applications-Asharaf/dp/9357461728/">book</a> is that&nbsp; it does not merely skim the surface of blockchain like most of the previous textbooks that I have encountered. It dives deep into the mesh of algorithms that form the backbone of this transformative technology.</p><p>The authors begin with a great introduction to the cryptographic principles underlying blockchain. They lay out the workings of hash functions, digital signatures, and consensus mechanisms, with a focus on how these elements ensure security and integrity within a distributed network.&nbsp;</p><p>The detailed analysis of consensus algorithms like <em>Proof of Work (PoW)</em> and <em>Proof of Stake (PoS)</em> is very well covered. The book then pivots to a discussion on the development of smart contracts. The authors meticulously describe the programming logic and the operational environment of smart contracts, primarily focusing on Ethereum's Solidity language. 
They provide code snippets and real-world scenarios that illustrate how smart contracts automate transactions and enforce agreements without intermediaries.</p><p>A significant portion of the text is dedicated to exploring various applications of blockchain technology beyond cryptocurrencies which I greatly appreciated. From supply chain management and healthcare to voting systems and financial services, the authors lay out case studies that show blockchain's potential to revolutionise industries by enhancing transparency and trust.&nbsp;</p><h2>Blockchain Tethered AI by Karen Kilroy</h2><p>Continuing the reading on blockchain, my recommendation for a followup reading would be this book. Kilroy masterfully outlines the synergistic potential of these technologies, emphasising how their convergence could redefine the paradigms of security, transparency, and autonomy in digital systems.</p><p>The <a href="https://www.amazon.com/Blockchain-Tethered-Trackable-Artificial-Intelligence/dp/1098130480/">book</a> opens with a foundational discussion on blockchain technology, revisiting its core components such as decentralised ledgers, consensus algorithms, and smart contracts. Kilroy then introduces artificial intelligence, focusing on machine learning algorithms and their capacity for data analysis and decision-making. I appreciate her methodical approach in linking these explanations to the concept of <em>'Blockchain Tethered AI',</em> which posits that blockchain can serve as a secure, transparent foundation for AI operations.</p><p>Kilroy dives deep into the potential applications of this hybrid technology. One standout section involves the use of blockchain to record and verify AI decisions in real-time, which not only enhances trust in AI systems but also creates an immutable audit trail. 
This aspect is particularly intriguing as it suggests a framework where AI's decision-making processes are both verifiable and non-repudiable, addressing significant concerns in critical areas such as finance and healthcare.</p><p>Another compelling discussion in the book revolves around the enhancement of AI's learning processes through blockchain. Kilroy presents a scenario where decentralised data on blockchains can be used to train AI models without compromising data privacy or security. This approach leverages blockchain's data integrity features to ensure that the data used for training AI is accurate and tamper-proof, potentially revolutionising how data is shared and utilised across industries.</p><p>Towards the end, Kilroy speculates on future developments, such as the creation of autonomous decentralised organisations run by AI on blockchain platforms. All in all, it's quite an interesting read.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h2>A Brief History of Intelligence by Max Bennett</h2><p>Max Bennett takes us on an enlightening journey in this book, covering the evolution of intelligence, from its biological roots to its most recent developments in artificial intelligence. This book is not just history, but also deeply analytical, posing significant questions about the nature of intelligence itself.</p><p>The <a href="https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343/">book</a> begins with biological intelligence, discussing how cognitive functions have developed across different species over millions of years. 
The neurological underpinnings of intelligence are covered, explaining complex concepts like neural plasticity and cognitive architecture in a manner that resonates with engineers.</p><p>After laying this sound foundation, Bennett smoothly transitions to artificial intelligence, providing a thorough review of its historical progression, from early theories and models in the mid-20th century to the sophisticated algorithms we see today. He explores key milestones, such as the development of neural networks and the impact of machine learning techniques, which have enabled AI systems to perform tasks ranging from pattern recognition to decision-making under uncertainty.</p><p>A core theme in this part of the book is the parallel drawn between biological and artificial intelligence. Bennett discusses how modern AI research increasingly seeks to mimic human cognitive processes, such as learning and problem-solving, through deep learning and reinforcement learning. If you read my works regularly, then you know that I have great apprehensions about this idea. But as an engineer, I find this comparison intriguing and can&#8217;t help but be drawn to its nuances.</p><h2>Rust Atomics and Locks by Mara Bos</h2><p>This <a href="https://www.amazon.com/Rust-Atomics-Locks-Low-Level-Concurrency/dp/1098119444/">book</a> is part of my ongoing venture of infiltrating the Rustacean clan. Although a big supporter of Rust, I have yet to actively use it in one of my projects and I am still exploring avenues of interest for myself. If you are in the same boat (or a similar one), then hop on and read this one.</p><p>The focus of this book is on the concurrency features of the Rust programming language, which are important for writing safe and efficient multi-threaded code. 
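</p><p>As a small taste of those features (my own sketch, not an example from the book): several threads bump one shared counter through an atomic, so no increment is lost even without a lock.</p>

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

// Several threads increment one shared counter. fetch_add is an atomic
// read-modify-write, so increments are never lost. Relaxed ordering is
// enough here: we only need the final count, and joining the threads
// (which thread::scope does on exit) provides the synchronisation.
fn parallel_count(threads: u64, per_thread: u64) -> u64 {
    let counter = AtomicU64::new(0);
    thread::scope(|s| {
        for _ in 0..threads {
            s.spawn(|| {
                for _ in 0..per_thread {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    counter.load(Ordering::Relaxed)
}

fn main() {
    // Always 400000 - try the same loop with a plain u64 and data races
    // (which Rust refuses to compile) would be the alternative.
    println!("total = {}", parallel_count(4, 100_000));
}
```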
The book meticulously explores the mechanisms Rust offers to manage memory safely in concurrent systems, which is particularly relevant given Rust&#8217;s emphasis on safety and performance.</p><p>The author begins with an introduction to Rust&#8217;s memory safety guarantees, particularly its <em>ownership, borrowing,</em> and <em>lifetime</em> principles, which naturally lead into a discussion on concurrency. This foundation is crucial for understanding how Rust attempts to prevent data races and other concurrency errors that are common in systems programming.</p><p>The book provides a deep dive into Rust&#8217;s atomic types in the std::sync::atomic module. It explains the concept of atomics as low-level primitives that provide safe shared access to mutable data. The detailed coverage of different atomic operations such as load, store, fetch_add, and fetch_sub, and how these can be used to build non-blocking data structures, is particularly insightful. The discussion on memory ordering options (Relaxed, Acquire, Release, AcqRel, and SeqCst) and their implications for the visibility of side effects across threads is both comprehensive and critical for designing concurrent applications.</p><p>The section on locks is equally thorough. It contrasts the use of mutexes, read-write locks, and condition variables with the lock-free techniques described earlier. The book details how std::sync::Mutex and std::sync::RwLock can be used to guard shared data, including practical code examples that illustrate common patterns and potential pitfalls such as deadlock scenarios.</p><p>One of the standout features of this book is the litany of code examples that illustrate both correct and problematic usage. This is super helpful for a Rust newbie such as myself.</p><h2>Command Line Rust by Ken Youens-Clark</h2><p>I love designing CLIs. I can share with you a list of logical and practical reasons for it but simply put, I find them super cool! 
There is something really &#8216;engineery&#8217; about working on a terminal that can never be replaced by a UI. As someone who has designed plenty of CLIs in his lifetime, I am naturally drawn to doing the same with Rust.</p><p>This <a href="https://www.amazon.com/Command-Line-Rust-Project-Based-Primer-Writing/dp/1098109430/">book</a> provides a comprehensive tour of Rust&#8217;s capabilities, particularly its safety features and performance, which are key for developing command-line tools.</p><p>The book starts with an introduction to Rust&#8217;s syntax and basic concepts, such as ownership, types, and error handling, setting a solid foundation even for readers new to the language, such as myself. Clark emphasises the importance of Rust's type system and compiler checks, which ensure the reliability and maintainability of code.</p><p>As I progress through the book, I appreciate Clark's focus on practical application. He guides readers through setting up a Rust development environment and crafting a basic "Hello, world!" application. From there, he delves into more complex topics such as parsing command-line arguments using the clap crate, which simplifies the process of handling user inputs.</p><p>One of the sections that I appreciated the most was on error handling in Rust. It covered how to manage expected and unexpected errors gracefully. 
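</p><p>As a flavour of that style, here is a tiny sketch of my own (not one of the book&#8217;s examples, and using plain std rather than clap): parse a command-line argument, turning both a missing argument and a malformed one into a Result error instead of a panic.</p>

```rust
// Parse the first CLI argument as a count, mapping both failure modes
// (argument missing, argument not a number) into a Result error rather
// than a panic. The `?` converts the &str error into String via From.
fn parse_count(args: &[&str]) -> Result<u32, String> {
    let raw = args.get(1).ok_or("missing <count> argument")?; // Option -> Result
    raw.parse::<u32>()
        .map_err(|e| format!("invalid count {:?}: {}", raw, e))
}

fn main() {
    // In a real binary the arguments would come from std::env::args().
    match parse_count(&["prog", "3"]) {
        Ok(n) => println!("count = {}", n),
        Err(msg) => eprintln!("error: {}", msg),
    }
}
```

The caller decides what an error means (print a usage message, retry, abort), which is exactly the separation that makes CLI tools resilient to bad input.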
Clark provides examples using the Result and Option types to handle different failure modes, which is crucial for building resilient applications that can cope with a variety of user input errors and system failures.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>The Staff Engineer&#8217;s Path by Tanya Reilly</h2><p>Alright, stepping out of the CLI and jumping into office politics. Jokes apart, it is important that we talk about the role of a staff engineer. Tanya Reilly does a <a href="https://www.amazon.com/Staff-Engineers-Path-Individual-Contributors/dp/B0C7SFBXW3/">splendid job</a> defining the responsibilities of a staff engineer.</p><p>Reilly begins by defining what a staff engineer is and what they are not, setting clear expectations about the role. She emphasises that becoming a staff engineer involves much more than just technical excellence; it requires a strategic vision, the ability to influence without authority, and a commitment to mentoring others. This introduction sets the stage for understanding the depth and breadth of impact a staff engineer is expected to have within an organisation. If you are an engineer worried about AI taking away your job, I actually recommend reading this one because if that were to really happen (although I don&#8217;t subscribe to that possibility), then staff engineer is probably one of the safest roles to hold.</p><p>One of the core chapters discusses the key skills necessary for the role. 
It&#8217;s not just about coding; Reilly highlights skills such as architectural insight, system design, technical decision-making, and cross-functional leadership. Each skill is explored in detail, with practical advice on how to develop and demonstrate these capabilities in a work setting. I felt it was a must-read for engineers.</p><p>Reilly also gives advice on building relationships with key stakeholders, understanding business goals, and leveraging technical strategies to drive organisational success. These elements are critical, as they illustrate how staff engineers can use their position to bridge the gap between engineering teams and executive management.</p><p>She also addresses the challenges and common pitfalls in the path to becoming a staff engineer, such as the risk of becoming isolated in deep technical work without broader influence. She offers strategies for maintaining visibility in the organisation, such as leading high-impact projects and engaging in community and industry events.</p><p>Towards the end, she shifts focus towards long-term career growth, discussing how to sustain the role's demands while continuing personal and professional development.</p><h2>The Inner Game of Tennis by Timothy Gallwey</h2><p>Finally, I would like to wrap up my recommendations with some &#8216;philosophy of life&#8217;. I loved this <a href="https://www.amazon.com/The-Inner-Game-of-Tennis-audiobook/dp/B0012FK22S/">book</a> primarily because of the tennis metaphor. Being a huge fan of the sport, I could seriously relate to the &#8216;mental&#8217; game that the author talks about in detail. Also, apparently this is one of Bill Gates&#8217;s favourite reads as well. Given that he is another tennis fan, I am not surprised.</p><p>It's a profound exploration of the psychology of performance, which resonates deeply with me as an engineer. 
Gallwey introduces a revolutionary approach to learning and coaching that is as applicable to mastering a programming language as it is to hitting a forehand.</p><p>Gallwey starts with the premise that our minds have two selves, <em>Self 1,</em> the <em>teller,</em> and <em>Self 2,</em> the <em>doer.</em> <em>Self 1</em> is analytical and often critical, constantly offering instructions and criticisms, while <em>Self 2</em> is the self that actually performs the action. He argues that learning and performance are enhanced when <em>Self 1</em> is quieted, allowing <em>Self 2</em> to operate based on the body's intuitive and natural abilities.</p><p>A significant portion of the book is dedicated to the concept of <em>"letting go"</em> of judgement and the fear of failure, which leads to greater focus and a higher level of performance. Gallwey provides practical strategies for achieving this state, such as focusing on the seams of the tennis ball, which parallels focusing on the process rather than the outcome of a complex project or coding challenge.</p><p>Gallwey also delves into the concept of <em>"trust,"</em> which involves allowing one&#8217;s own abilities to flourish without over-management. The book also touches on effective coaching techniques. 
Gallwey advocates for coaches (or leaders) to facilitate learning by creating an environment that is conducive to self-discovery rather than instruction-heavy.&nbsp;</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1VHsTGIXow2uNrErp8rxcBx8hEX5kBJSZ/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1VHsTGIXow2uNrErp8rxcBx8hEX5kBJSZ/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/summer-reads-for-nerds-engineers/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Using Model Averaging to Enhance Deep Learning]]></title><description><![CDATA[Understanding and Implementation of Model Averaging Ensemble Techniques in Deep Learning Models with a Hypothetical Case Study]]></description><link>https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 29 Apr 2024 00:30:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Readers,</p><p>Today I have a special article for all the machine learning enthusiasts (which is basically most of my readers &#129299;). I've been thinking a lot lately about <strong>model averaging.</strong> </p><p><strong>Model averaging</strong> is a transformative approach that can drastically improve the reliability and accuracy of our predictive models. 
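</p><p>The core idea fits in a few lines: run several models on the same input and average their predictions, so that their individual errors partially cancel. A minimal sketch of my own, with made-up functions standing in for trained networks:</p>

```rust
// Each "model" is just a function from a feature vector to a score; the
// ensemble prediction is the mean of the individual predictions.
// (Assumes at least one model; real ensembles would wrap trained networks.)
fn ensemble_predict(models: &[fn(&[f64]) -> f64], x: &[f64]) -> f64 {
    let sum: f64 = models.iter().map(|m| m(x)).sum();
    sum / models.len() as f64
}

fn main() {
    let models: Vec<fn(&[f64]) -> f64> = vec![
        |x| x.iter().sum::<f64>(),       // model A: unbiased
        |x| x.iter().sum::<f64>() + 0.3, // model B: biased high
        |x| x.iter().sum::<f64>() - 0.3, // model C: biased low
    ];
    // The opposite biases largely cancel in the average.
    println!("prediction = {}", ensemble_predict(&models, &[1.0, 2.0]));
}
```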
Given its importance, I've crafted a <a href="https://drive.google.com/file/d/18bj89zmxB4mhl_SkUTsoHraw6bwixCyd/view?usp=sharing">free detailed guide</a> that begins with the foundational theories and extends into practical applications across industries. </p><p><strong>In this article, you will find:</strong></p><ul><li><p><strong>Theoretical Insights:</strong> Understanding the core principles that make model averaging a critical tool for reducing variance and bias in predictions.</p></li><li><p><strong>Practical Case Studies:</strong> I take you through specific scenarios in fields like weather forecasting, where model averaging not only enhances accuracy but also provides a buffer against unpredictability.</p></li><li><p><strong>Technical Deep Dive:</strong> For those of you who thrive on the technical aspects, I've included detailed examinations of model architectures and the algorithms that dynamically adjust their performance based on real-time data.</p></li><li><p><strong>Challenges and Solutions:</strong> No exploration is complete without addressing the challenges. I discuss the potential pitfalls and the strategic maneuvers to overcome them.</p></li></ul><p>Whether you're directly involved in machine learning projects or simply interested in the cutting edge of predictive analytics, there's something in this study for you. <strong>It&#8217;s available for download at NO cost</strong>. </p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/18bj89zmxB4mhl_SkUTsoHraw6bwixCyd/view?usp=sharing&quot;,&quot;text&quot;:&quot;DOWNLOAD PDF FOR FREE&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/18bj89zmxB4mhl_SkUTsoHraw6bwixCyd/view?usp=sharing"><span>DOWNLOAD PDF FOR FREE</span></a></p><p>I invite you to share your feedback and insights on this article. 
Thank you for your continuous support and curiosity, which inspire me to dig deeper and bring these insights to our community.</p><p>Warm regards,</p><p><a href="https://mirhsquadri.com/">Mir H. S. Quadri</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QTCK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QTCK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QTCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QTCK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!QTCK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc47c154c-9a55-48f3-9f85-01996fe299d5_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - Model Averaging Across Multiple Neural Networks</figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance/comments"><span>Leave a comment</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/using-model-averaging-to-enhance?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p>]]></content:encoded></item><item><title><![CDATA[A New Schedule]]></title><description><![CDATA[An Update Regarding the Re-Scheduling of the Weekly Special Edition Articles at Arkinfo Notes]]></description><link>https://notes.arkinfo.xyz/p/a-new-schedule</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/a-new-schedule</guid><dc:creator><![CDATA[Mir H. S. 
Quadri]]></dc:creator><pubDate>Fri, 19 Apr 2024 08:02:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!JT81!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Readers,</p><p>As you know, this April marks the completion of our first year together on Substack. In the span of this year alone, I have managed to publish close to <strong>100 articles</strong> across both <strong><a href="https://notes.arkinfo.xyz">Arkinfo Notes</a></strong> and <strong><a href="https://notebook.lumeni.xyz">The Lumeni Notebook</a></strong>.</p><p>It has been an incredibly rewarding experience and I am grateful to all my subscribers and fellow writers who have buoyed me throughout.</p><p>Going forward, I shall continue my journey on this platform, <strong>focussing specifically on sharing content that is long form and extremely high quality, both in terms of its depth of research and insights for my readers.</strong></p><p>This will require me to dedicate more time and effort to each article, and a weekly publishing routine may not be the perfect fit for this approach. So starting next week, I will reorganise the publishing schedule for both substacks from a <strong>weekly to a bi-weekly basis,</strong> i.e., 2 articles a month instead of 4 articles on each Substack.</p><p>I believe this is the best approach, both for my readers as well as the future of these substacks. The quality of each article will drastically improve as a result and make the work more rewarding and durable in the long run.</p><p>Thank you all so much for supporting me throughout this journey and I look forward to another great year on Substack. 
Cheers!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JT81!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!JT81!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!JT81!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!JT81!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!JT81!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!JT81!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:539212,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!JT81!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!JT81!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!JT81!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!JT81!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01fc7016-2448-4aa9-881b-7a6e38b28f05_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Higgs Boson]]></title><description><![CDATA[Discussing the Significance of the Higgs Boson and the CERN LHC]]></description><link>https://notes.arkinfo.xyz/p/the-higgs-boson</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/the-higgs-boson</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 15 Apr 2024 00:30:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oMwK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The world has just bid farewell to <strong>Peter Higgs</strong> at the age of 94. This moment marks the end of an era in theoretical physics. Few names resonate in this field as profoundly as his. 
Peter Higgs&#8217; groundbreaking work paved the way for one of the most significant scientific discoveries of our time - the Higgs Boson.</p><p>Keeping true to Higgs&#8217; own personal preference in this regard, I shall not sensationalise or oversimplify his discovery by referring to it as the <em>&#8216;God Particle&#8217;.</em> Instead, we will focus on discussing the very essence of Higgs' thesis and the machinery of the Large Hadron Collider (LHC) at CERN that turned theory into reality.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oMwK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oMwK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!oMwK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oMwK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!oMwK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d2d1908-1f87-47d8-906d-e2d22698847c_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset 
pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - The Higgs Boson</figcaption></figure></div><h2>What was Higgs&#8217; Thesis?</h2><p><em>How do particles acquire mass? </em>This was a major question in theoretical physics until the arrival of Peter Higgs.
The answer, as proposed by Higgs and independently by several other physicists in the 1960s, revolves around the concept of <em>spontaneous symmetry</em> breaking in quantum fields.</p><p>The <em>Standard Model</em> of particle physics, our best theory describing the fundamental forces and particles, relies heavily on the concept of <em>symmetry.</em> Yet, for the universe to function as we observe, some of these symmetries must be broken. The mechanism that facilitates this symmetry breaking, now famously known as the <em>Higgs mechanism,</em> provides a framework through which particles gain mass.</p><p>To understand this, we will have to discuss <em>quantum field theory.</em> Every particle is associated with a <em>field</em> that permeates all of space. The <em>Higgs field</em> is such a field, unique in that it has a <em>non-zero value</em> even in its lowest energy state, or vacuum state. <strong>As particles interact with this field, they acquire mass.</strong> The interaction strength or coupling with the Higgs field determines the mass of a particle. 
This can be expressed through the following equation, where <em>&#966;</em> represents the Higgs field and <em>m</em> denotes the mass of a fundamental particle</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VTyP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VTyP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 424w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 848w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 1272w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VTyP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png" width="153" height="30.923076923076923"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:134,&quot;width&quot;:663,&quot;resizeWidth&quot;:153,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VTyP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 424w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 848w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 1272w, https://substackcdn.com/image/fetch/$s_!VTyP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa80a374-f8b8-4201-9eee-a118136d6f9f_663x134.png 1456w" sizes="100vw"></picture><div></div></div></a></figure></div><p>Here, represents the coupling constant, a parameter that varies between different types of particles.</p><p>The existence of the Higgs field, while solving the mass puzzle, predicted a corresponding quantum, i.e., <em>the Higgs Boson,</em> as a manifestation of field fluctuations.&nbsp;</p><p>This approach to mass and symmetry breaking was a game changer. 
It suggested that the <strong>properties of the vacuum itself, through the Higgs field, could impart mass to other particles.</strong>&nbsp;</p><h2>The Large Hadron Collider</h2><p>The Large Hadron Collider (LHC) is the world's largest and most powerful particle accelerator. It was designed to recreate conditions a fraction of a second after the Big Bang, in order to observe particles like the Higgs Boson.&nbsp;</p><p>The collider's construction was an engineering feat in itself, consisting of a 27-kilometre ring of superconducting magnets, cooled to temperatures colder than outer space, to guide and accelerate protons to near the speed of light.</p><p>Protons are accelerated in opposite directions in the LHC's dual ring structure, gaining energy with each pass until they collide at points where massive detectors are positioned. These detectors, including <em>ATLAS</em> and <em>CMS,</em> are designed to observe the aftermath of proton-proton collisions, searching for new particles among the outcomes.</p><h3>ATLAS and CMS</h3><p>The ATLAS (A Toroidal LHC ApparatuS) and CMS (Compact Muon Solenoid) experiments were specifically designed to detect the presence of the Higgs Boson, among other research goals. These detectors work by tracking the particles produced in collisions, with each component designed to measure different types of particles and energies. 
The complexity of these experiments necessitates high precision in both the equipment and the analysis of the data they produce.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!iYY0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iYY0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iYY0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png" width="1000" height="1000"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iYY0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!iYY0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb9d25133-75ac-4b22-b69d-4bf4689ba450_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - The Large Hadron Collider</figcaption></figure></div><h3>The Discovery</h3><p>Finding the Higgs Boson in the enormous amount of data generated by the LHC collisions was like finding a needle in a cosmic haystack. Physicists and engineers employed sophisticated statistical methods to identify the signature of the Higgs Boson.
One key concept in this search was the idea of <em>"decay channels,"</em> i.e., the specific patterns of particle decay that indicate the presence of a Higgs Boson. The decay process of the Higgs Boson into other particles, such as photons or <em>Z</em> bosons, provides indirect evidence of its existence.</p><p>The statistical significance of the observed data was measured in terms of <em>"sigma"</em> levels. <strong>The discovery of the Higgs Boson was confirmed at the 5 sigma level, indicating a less than 1 in 3.5 million probability that the observed pattern arose by chance rather than from the presence of the Higgs Boson.</strong></p><p>This monumental task of data analysis and interpretation culminated on July 4, 2012, when teams from both the ATLAS and CMS experiments announced the discovery of a new particle consistent with the Higgs Boson. This announcement was nothing short of a moon landing moment for science, showcasing what can be achieved through global collaboration and ingenuity.</p><h2>Decay Channels and Sigma Levels</h2><p>2012 was a monumental year for CERN and the field of particle physics, culminating in the awarding of the Nobel Prize in Physics to Peter Higgs and Fran&#231;ois Englert in 2013.&nbsp;</p><p>Before moving on to the recent discoveries being made at CERN by the LHC, I would like to share some short notes on <em>Decay Channels</em> and <em>Sigma Levels.</em> Although we did briefly discuss these concepts in the previous section, I find it pertinent to share some more information about them in order to underscore their significance.</p><h3>Decay Channels</h3><p>The Higgs Boson, by nature, is incredibly short-lived. Once produced in a high-energy collision within the LHC, it decays almost instantaneously into other particles. These decay channels are what experiments like ATLAS and CMS aim to detect.
The primary decay channels through which the Higgs Boson was observed include its decay into two photons and its decay into four leptons.&nbsp;</p><p>The decay into two photons is particularly significant due to its clear signature in the detectors. The equation governing this decay process can be represented as</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gWdc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gWdc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 424w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 848w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 1272w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gWdc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png" width="111" height="28.439770554493307"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:134,&quot;width&quot;:523,&quot;resizeWidth&quot;:111,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gWdc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 424w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 848w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 1272w, https://substackcdn.com/image/fetch/$s_!gWdc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f3c5d3f-794d-4821-a166-2b9e94eb5417_523x134.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>While the decay into four leptons, through an intermediate pair of <em>Z</em> bosons, offers a distinct and measurable final state, represented as</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!dCbt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dCbt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 424w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 848w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 1272w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dCbt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png" width="188" height="21.543783783783784" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/def86b94-48cc-4c61-b924-2d1d105069c6_925x106.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:106,&quot;width&quot;:925,&quot;resizeWidth&quot;:188,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dCbt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 424w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 848w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 1272w, https://substackcdn.com/image/fetch/$s_!dCbt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdef86b94-48cc-4c61-b924-2d1d105069c6_925x106.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>These channels provided the clearest evidence of the Higgs Boson's existence, with the data from these decay processes meticulously analysed to ensure the observed particles were indeed the products of a Higgs Boson decay.</p><h3>Sigma Levels</h3><p>In particle physics, the certainty of a discovery is often measured in <em>"sigma"</em> levels. 
This is a statistical measure indicating how unlikely it is that an observed effect is due to chance. The gold standard for a discovery in particle physics is <em>5 sigma,</em> which corresponds to a probability of about 1 in 3.5 million that the result is a fluke.</p><p>The announcement of the Higgs Boson's discovery was made when both the ATLAS and CMS experiments independently reached and surpassed this 5 sigma threshold, providing compelling evidence for the Higgs Boson's existence. This rigorous statistical analysis was crucial in confirming that the signals detected were not merely anomalies but indicative of the Higgs Boson.</p><h2>Recent Discoveries after The Higgs Boson</h2><p>The LHC experiments, especially the LHCb (Large Hadron Collider beauty) experiment, have been at the forefront of discovering new particles.</p><p>The LHCb collaboration has observed three never-before-seen particles: a new kind of <em>"pentaquark"</em> and the first-ever pair of <em>"tetraquarks"</em>. These discoveries are crucial as they provide insights into how quarks bind together to form composite particles, expanding our knowledge beyond the conventional hadrons made up of two or three quarks.</p><p><strong>Over the past decade, more than 59 new hadrons have been identified.</strong> This has helped shed light on the complex interactions governed by <em>quantum chromodynamics (QCD), </em>the theory describing the strong force that holds quarks together inside hadrons.
These discoveries include excited states of both known <em>baryons</em> and <em>mesons.</em>&nbsp;</p><p>This <em>"particle zoo 2.0"</em> not only tests the limits of the quark model but also enhances our understanding of the strong interaction, crucial for accurately modelling collisions at the LHC and potentially hinting at new physics phenomena.</p><p>These experiments have also provided invaluable insights into the interactions of the Higgs boson with other particles, confirming that <em>W</em> and <em>Z</em> bosons, as well as the heaviest fermions like the <em>top quark, bottom quark,</em> and <em>tau lepton,</em> obtain their mass through interactions with the Higgs field (CERN).</p><h2>The Future of Particle Physics</h2><p>The identification of the Higgs Boson was a milestone marking the completion of the Standard Model of particle physics. But as is the case with every discovery, it also presents new puzzles.&nbsp;</p><p>One of the most pressing questions is <strong>the nature of dark matter,</strong> which makes up about 27% of the universe but doesn't interact with the electromagnetic force, making it invisible and detectable only through its gravitational effects. <strong>The Standard Model, even with the Higgs Boson, does not account for dark matter.</strong></p><p>Another significant area of research is <strong>the imbalance between matter and antimatter in the universe.</strong> Theories and experiments, including those at the LHC, are actively investigating why the universe is dominated by matter, despite the expectation that the Big Bang should have produced equal amounts of matter and antimatter.</p><p>The discovery of the Higgs Boson has also spurred the development of new particle accelerators and experiments designed to explore these unanswered questions.
Future projects, such as the High-Luminosity LHC (HL-LHC) and proposed colliders like the Future Circular Collider (FCC) and the International Linear Collider (ILC), aim to provide higher energy levels and collision rates. These enhancements are crucial for probing deeper into the Standard Model and beyond, searching for evidence of supersymmetry, extra dimensions, or other phenomena that could revolutionise our understanding of the universe.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1GA7a5WtyO7eSuY7uz17rS_Rd2Qv866Lp/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1GA7a5WtyO7eSuY7uz17rS_Rd2Qv866Lp/view?usp=sharing"><span>Download Article as PDF</span></a></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/the-higgs-boson/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/the-higgs-boson/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Is Deep Space Silent?]]></title><description><![CDATA[Understanding the Nature of Sound and the Possibilities of Its Existence in Deep Space]]></description><link>https://notes.arkinfo.xyz/p/is-deep-space-silent</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/is-deep-space-silent</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 08 Apr 2024 12:30:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Nreh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64a0988-054f-4854-9504-17d4ae4a4d8d_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Is deep space silent? Many might respond with a straightforward <em>Yes.</em> But there is more to it than that. To understand the presence of sound in deep space, we will have to return to fundamentals such as the nature of sound, the mechanics of its propagation, and the realm we refer to as deep space.</p><p>The objective of this article is twofold.
First, to dissect the concept of sound from a scientific perspective by discussing its mechanics, its modes of travel, and its manifestations in different media. Second, to extend this understanding beyond the confines of our atmosphere, into deep space.</p><p>This inquiry broadens our understanding of the conditions under which sound, or phenomena similar to sound, might exist. In doing so, we might just realise that even in the vacuum of space, the universe is not as silent as it appears.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Nreh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb64a0988-054f-4854-9504-17d4ae4a4d8d_1000x1000.png" width="1000" height="1000" alt="Artwork - Sound waves in Deep Space"><figcaption class="image-caption">Artwork - Sound waves in Deep Space</figcaption></figure></div><h2>What is Sound?</h2><p>At its core, sound is a manifestation of the vibrancy of the universe, a mechanical wave that necessitates a medium to propagate. 
This wave is born from <em>vibrations</em>, i.e., disturbances that displace particles within a <em>medium</em>, be it gas, liquid, or solid, thus transferring energy through the medium in a <em>wave pattern</em>.&nbsp;</p><p>Formally, we describe sound as a series of <em>pressure variations</em>, or <em>acoustic waves</em>, that can be represented mathematically by the equation</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!BUov!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F03977b54-3be2-4fa0-9682-ce8610f7fd89_1557x150.png" width="416" height="40" alt="p(t) = p0 + A sin(2&#960;ft)"></figure></div><p>where <em>p(t)</em> denotes the pressure at time <em>t</em>, <em>p<sub>0</sub></em> is the ambient pressure, <em>A</em> represents the pressure variation amplitude, and <em>f</em> is the frequency of the wave.</p><p>This equation underscores the dual nature of sound as both a physical phenomenon and a perceptible sensation, delineated by the wave's frequency, which determines pitch, and amplitude, which influences loudness.&nbsp;</p><p>Sound's existence is 
inherently tied to the presence of matter, as it is the medium through which sound waves travel. The absence of such a medium poses a significant barrier to sound propagation. This is where the question of sound in a <em>vacuum</em> arises.</p><p>While this may be a problem for <em>sound,</em> it's not necessarily a problem for every type of wave. The distinction between sound and other types of waves is pertinent when considering the vast reaches of space. <em>Electromagnetic waves,</em> for instance, do not require a <em>medium</em> and can traverse the <em>vacuum of space</em> unimpeded.&nbsp;</p><h2>How does Sound Travel?</h2><p>To understand the traversal of a sound wave, we must first grasp the concept of sound as a <em>mechanical wave,</em> a disturbance that propagates through a medium by the interaction of its particles. The medium's nature, be it gas, liquid, or solid, makes a difference in the speed and manner in which sound travels.</p><p>In a simplistic model, when a sound wave moves through a medium, it causes the particles of the medium to oscillate back and forth from their <em>equilibrium</em> position. This oscillation transfers energy from one particle to the next, propagating the wave through the medium. The speed of sound, denoted as <em>v</em>, is not constant but varies with the medium's properties, most notably its density and elasticity. 
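As a quick aside, the pressure relation p(t) = p0 + A·sin(2πft) introduced earlier can be sampled numerically. The sketch below uses purely illustrative values (sea-level ambient pressure, a 440 Hz tone, a 0.1 Pa amplitude) that are not taken from this article:

```python
import math

def pressure(t, p0=101325.0, amplitude=0.1, frequency=440.0):
    """Instantaneous acoustic pressure p(t) = p0 + A*sin(2*pi*f*t).

    p0: ambient pressure in pascals (sea-level air, illustrative)
    amplitude: pressure-variation amplitude A in pascals
    frequency: wave frequency f in hertz (440 Hz, illustrative)
    """
    return p0 + amplitude * math.sin(2 * math.pi * frequency * t)

# Sample 10 ms of the wave at a 44.1 kHz rate: the pressure
# oscillates symmetrically about the ambient value p0.
samples = [pressure(t / 44100.0) for t in range(441)]
print(min(samples), max(samples))
```

The oscillation about p0 is exactly the "series of pressure variations" the equation describes; the amplitude sets how far the pressure swings, the frequency how quickly.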
The general formula for the speed of sound in a medium is given by&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!hNrP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F273868ed-77cf-4aa8-a025-6cbd64ff75d0_551x448.png" width="103" height="84" alt="v = sqrt(E / &#961;)"></figure></div><p>where <em>E</em> represents the medium's <em>modulus of elasticity</em>, i.e., its <em>stiffness</em>, and <em>&#961;</em> its <em>density</em>. This equation highlights how sound travels faster in media that are more <em>elastic</em> or less dense.</p><p>Air, being a gas, offers less resistance to particle movement, resulting in slower sound speeds compared to liquids and solids. 
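The formula can be checked against everyday materials. A minimal sketch, using rough textbook values for the moduli and densities (these figures are assumptions for illustration, not from this article):

```python
import math

def speed_of_sound(modulus, density):
    """v = sqrt(E / rho): elastic modulus in Pa, density in kg/m^3."""
    return math.sqrt(modulus / density)

# Rough textbook values (illustrative): bulk modulus of water ~2.2 GPa,
# Young's modulus of steel ~200 GPa.
v_water = speed_of_sound(2.2e9, 1000.0)   # roughly 1500 m/s
v_steel = speed_of_sound(200e9, 7850.0)   # roughly 5000 m/s
print(round(v_water), round(v_steel))
```

For solids the appropriate modulus depends on the wave type (longitudinal waves in a bulk solid use a different elastic constant than a thin rod), which is why quoted speeds for a material like steel vary between sources.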
The speed of sound in dry air at 20&#176;C is approximately 343 metres per second (m/s). In water, sound travels at about 1482 m/s, and in steel it reaches speeds of up to 5941 m/s. These variations are crucial in understanding not just how sound travels but also how its speed influences our perception of sound, including phenomena such as the <em>Doppler effect.</em></p><h2>What is Deep Space?</h2><p>In order to understand the behaviour of sound in the void, we first need to understand the void itself. So let&#8217;s discuss Deep Space.&nbsp;</p><p>The Earth's atmosphere gradually fades into space through a series of layers, the most distant of which is the <em>exosphere.</em> Beyond this lies what we term <em>space,</em> starting with <em>near-Earth space, </em>populated by satellites orbiting our planet. Then comes <em>interplanetary space</em> within our Solar System, marked by the presence of planets, asteroids, and the Sun's solar wind.</p><p>As we venture farther, we enter <em>interstellar space.</em> This vast region, filled with the gas and dust remnants of ancient stars, stretches out to the boundaries of our galaxy, the Milky Way. 
Beyond this lies <em>intergalactic space,</em> where the distances are so vast that they are measured in <em>light-years,</em> and the concept of sound as we understand it becomes even more abstract.</p><p>Deep space, for the purposes of our discussion, refers to the regions beyond our immediate Solar System, covering both interstellar and intergalactic space.&nbsp;</p><p>The vacuum of this area is not so much about emptiness as it is about vast distances. Deep space holds thin gas clouds and cosmic phenomena occurring on scales beyond our comprehension.&nbsp;</p><p>In such a <em>vacuum,</em> traditional sound waves, as pressure waves mediated by the collision of molecules, find no medium to traverse. This absence of a conductive medium leads to the common assertion that <em>space is silent.</em></p><h2>Is Deep Space Silent?</h2><p>Mostly, <em>Yes.</em> However, this silence is not absolute. The problem lies with our human-centric understanding of sound as a phenomenon that requires a medium composed of atoms or molecules to travel. In deep space, the conditions are markedly different, defying our conventional perceptions of <em>sound</em> and <em>silence.</em></p><p>The density of particles in interstellar space can be as low as a few atoms per cubic metre, compared to Earth's atmospheric density of roughly 10<sup>25</sup> molecules per cubic metre at sea level.&nbsp;</p><p>However, the concept of silence in space is not that straightforward. 
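One way to see why such a thin medium fails to carry ordinary sound is to estimate the mean free path between particle collisions, λ ≈ 1/(nσ). The sketch below uses order-of-magnitude assumptions for the densities and the collision cross-section, chosen for illustration rather than taken from this article:

```python
def mean_free_path(number_density, cross_section=1e-19):
    """Mean free path (m) between collisions: lambda ~ 1 / (n * sigma).

    number_density: particles per cubic metre
    cross_section: effective collision cross-section in m^2
                   (~1e-19 m^2 for a small atom, order of magnitude)
    """
    return 1.0 / (number_density * cross_section)

# Sea-level air, ~2.5e25 molecules/m^3: collisions every fraction of
# a micrometre, so pressure waves form easily.
# Diffuse interstellar gas, ~1e6 atoms/m^3: a particle travels on the
# order of 1e13 m between collisions, far more than the Earth-Sun
# distance, so ordinary pressure waves cannot form.
print(mean_free_path(2.5e25), mean_free_path(1e6))
```

With collisions that rare, any disturbance disperses long before it can organise into the repeated compressions and rarefactions that make up a sound wave.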
Space, despite its vacuum state, is not devoid of matter entirely; it contains dust, gas, and plasma, albeit at densities far lower than on Earth.&nbsp;</p><p>These materials can, under <em>specific conditions</em>, carry vibrations or disturbances that resemble sound waves.&nbsp;Thus, while deep space may be silent in the conventional sense that human ears cannot hear sound there, sound can exist in Deep Space.&nbsp;</p><h2>Conditions for the Existence of Sound in Deep Space</h2><p>So under what conditions and circumstances can <em>sound,</em> in its broader definition, exist in Deep Space? I have listed a few possibilities below.</p><h3>Dense Clouds of Gas and Dust</h3><p>In regions of space where matter coalesces into dense clouds of gas and dust, such as nebulae or the accretion disks surrounding black holes, the conditions can support the propagation of vibrations similar to sound waves.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!2BIM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe925f69b-55ba-49f8-97be-8232ebd3dedd_1000x1000.png" width="1000" height="1000" alt="Artwork - Sound Travelling through Cosmic Clouds"><figcaption class="image-caption">Artwork - Sound Travelling through Cosmic Clouds</figcaption></figure></div><p>These clouds, though significantly less dense than any medium on Earth, possess enough particles to facilitate the transfer of energy through vibrational waves. The process is similar to sound wave propagation but occurs on a scale and at densities that defy Earth-bound analogies. Within these dense cosmic structures, shock waves&#8212;resulting from stellar explosions or the gravitational pull of massive objects&#8212;can ripple through the medium, demonstrating that sound, in a modified form, can indeed traverse the otherwise silent void of space.</p><h3>Black Holes and Gravitational Waves</h3><p>The extreme gravitational fields of black holes and the phenomena they induce, such as gravitational waves, present another possibility for sound in space.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!A48c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcef512-65e2-467f-bba1-cedd7be1f3e2_1000x1000.png" width="1000" height="1000" alt="Artwork - Sound Travelling through Gravitational Waves"><figcaption class="image-caption">Artwork - Sound Travelling through Gravitational Waves</figcaption></figure></div><p>Gravitational waves are ripples 
in the fabric of spacetime caused by the acceleration of massive objects. While not sound waves in the traditional sense, they share similarities in wave behaviour, propagating through space much like sound waves through a medium. When these waves pass through Earth, they subtly alter the distances between objects, an effect detectable with precise instruments. Thus, while gravitational waves do not constitute sound per se, their propagation mirrors the fundamental aspect of sound waves&#8212;transferring energy through a medium, in this case, the medium of spacetime itself.</p><h3>Plasma Waves</h3><p>Space is filled with plasma, a state of matter composed of free electrons and ions, prevalent in the solar wind and the magnetospheres of planets. Plasma waves, generated by the interaction of solar wind with planetary magnetic fields or by other energetic events in space, are yet another form of sound-like phenomena.&nbsp;</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!j7QD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd11d9db8-97b4-44ab-9a34-da93a3ccd3a7_1000x1000.png" width="1000" height="1000" alt="Artwork - Sound Travelling through Plasma Waves"><figcaption class="image-caption">Artwork - Sound Travelling through Plasma Waves</figcaption></figure></div><p>These waves can oscillate at audible 
frequencies but require translation by instruments into sound waves that humans can hear. For instance, spacecraft equipped with appropriate sensors can detect these plasma waves, converting their frequencies into audible sound. This process has allowed scientists to <em>listen</em> to the interactions between solar wind and Earth's magnetic field, providing insights into space weather phenomena.</p><p>The existence of these conditions and phenomena in deep space shows that while traditional sound waves, dependent on atmospheric or liquid mediums, cannot propagate in the vacuum of space, other forms of sound-like energy transfer do occur.&nbsp;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/is-deep-space-silent?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/is-deep-space-silent?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>Some Book Recommendations</h2><p>This whole idea of sound in deep space is quite fascinating to consider. If you&#8217;d like to venture deeper into this, there are a few books you might want to read.</p><p>Brian Clegg's <em>"Gravitational Waves: How Einstein&#8217;s spacetime ripples reveal the secrets of the universe"</em> is quite interesting. This book is not just about the waves themselves but about how they open a whole new chapter in our understanding of the cosmos. It is about trying to decipher the origins of the universe by studying the phenomena of cosmic waves in space.</p><p>Frank Verheest's <em>"Waves in Dusty Space Plasmas"</em> is another book to consider reading. Be warned though: it's a bit of a heavy lift, being more on the technical side of things. Not exactly your morning read with coffee (or maybe it is, definitely not for me though). 
But the good thing about this deep dive is that it guides you through the plasma waves and dusty corners of space, revealing a world where sound takes on a whole new meaning.</p><p>And for the music lovers, Jamie James' <em>"The Music of the Spheres"</em> is an excellent read. It is a blend of science, philosophy, and history, interwoven with music. It's all about how spacetime harmonises itself (with a musical connotation). James brings to life the ancient idea that everything in the cosmos, from the orbits of planets to the vibrations of stars, contributes to a universal melody.</p><p>These are my recommendations! If you enjoy reading any of these (or have read any of them already), drop a comment to let me know your thoughts.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1Ns9DDYrfXLu5cpoQXiL1uy7FM7Lf_z9O/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1Ns9DDYrfXLu5cpoQXiL1uy7FM7Lf_z9O/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/is-deep-space-silent/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/is-deep-space-silent/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[On the Importance of Writing, Art, Creativity & Polymathy in Engineering]]></title><description><![CDATA[A Special Edition for Arkinfo Notes&#8217; First Birthday!]]></description><link>https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art</link><guid isPermaLink="false">https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art</guid><dc:creator><![CDATA[Mir H. S. Quadri]]></dc:creator><pubDate>Mon, 01 Apr 2024 00:31:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Or9v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Today, Arkinfo Notes turned 1!</strong> I'm filled with profound gratitude for this cherished portal that we developed together, sharing insights and fostering a community of inquisitive minds.</p><p>When I started this newsletter last year, it was out of a desire to connect, share, and grow with you. Well, mission accomplished! This newsletter has now become the heartbeat of Arkinfo. 
Your unwavering support and engagement have propelled this platform forward and enriched it with the ideas that define us.</p><p>Having published 60 articles over the past 12 months, I have taken this milestone as an opportunity to look back at our journey together and to celebrate the essence of what makes engineering so captivating. It is a blend of precision and creativity, playing a foundational role in shaping the world and, most importantly, uniting us across geographies and cultures.</p><p>So in this special edition of Arkinfo Notes, I would like to share with you five stories of remarkable figures from history. These innovators, though not always in the limelight, played a crucial role in defining the field of engineering. Through their stories, I would like to highlight five key, albeit often overlooked, principles of engineering: writing and documentation, an interdisciplinary approach, creativity and polymathy, art, and the sharing of knowledge.</p><p>These principles resonate deeply with our own path at Arkinfo, where every challenge is an invitation to innovate, every setback a lesson in resilience, and every achievement a testament to the power of shared vision.&nbsp;</p><p>I would like, once again, to <strong>thank you</strong> for being an integral part of this journey. 
Here's to many more years of exploration, innovation, and shared successes.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Or9v!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Or9v!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Or9v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Or9v!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!Or9v!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9bbc2730-30c7-4032-ae1c-9a95e37571ee_1000x1000.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - Bridging Worlds with Engineering</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h2>Al Jazari and the Importance of Writing</h2><p>Al Jazari was an engineer and inventor of the Islamic Golden Age, hailing from the Diyarbak&#305;r region in present-day Turkey. Born in 1136, he served as the chief engineer at the Artuqid palace, where he devised many mechanical devices. He is revered for his contributions to the documentation of engineering practices. His most notable work, <em>"The Book of Knowledge of Ingenious Mechanical Devices,"</em> was completed in 1206 and remains an important manuscript in the history of engineering. 
Next week, I shall be dedicating our Monday issue to Al Jazari and his works, so if you haven&#8217;t already, subscribe to get notified.&nbsp;</p><p>Al Jazari's work was groundbreaking not only for its innovative nature but also for its meticulous documentation. His book details the construction and mechanisms of over fifty devices, ranging from clocks to water-raising machines, complete with illustrations and diagrams. This comprehensive approach ensured that his knowledge could be preserved and disseminated, laying foundational principles for modern engineering documentation.</p><p>Al Jazari&#8217;s approach highlights the critical role of clear, detailed writing and documentation in engineering. His work underscores that innovation doesn't exist in a vacuum and that the progress of engineering disciplines largely depends on the ability to share and build upon past knowledge effectively. Documentation ensures that innovative ideas are accessible, understandable, and improvable by future generations.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rW1p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rW1p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!rW1p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!rW1p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!rW1p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rW1p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rW1p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!rW1p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 848w, 
https://substackcdn.com/image/fetch/$s_!rW1p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!rW1p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F447ea9cb-eb7a-4288-93f4-0e6b9ac32c7a_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - A Portrait of Al Jazari (This image is generated using AI and does not represent the real-life likeness of 
the figure)</figcaption></figure></div><p>With Arkinfo Notes, I have tried to embody this spirit of meticulous documentation and knowledge sharing. In the fast-evolving field of software engineering, keeping an accurate and accessible record of innovations, challenges, and solutions is vital.&nbsp;</p><p>I've aimed to not only share the technical aspects of engineering projects but also the thought processes, challenges, and creative solutions that emerged along the way. This practice has been instrumental in creating a shared knowledge base that supports collaboration, encourages innovation, and fosters a culture of continuous learning within and beyond Arkinfo.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>Ibn al-Haytham and the Importance of an Interdisciplinary Approach</h2><p>Ibn al-Haytham, also known by the Latinized name <em>Alhazen,</em> was a pioneering polymath who lived from circa 965 to 1040. Hailing from Basra in present-day Iraq, he made significant contributions across various fields including mathematics, astronomy, and optics. His most renowned work, <em>"Kitab al-Manazir" (Book of Optics),</em> profoundly influenced the understanding of vision, light, and optical phenomena for centuries.</p><p>Ibn al-Haytham's interdisciplinary approach to science and his methodological contributions to experimentation are among his most lasting legacies. By integrating mathematical rigour with physical experimentation, he laid the groundwork for the modern scientific method. 
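His optical investigations lend themselves to a concrete illustration. The sketch below is a modern aside, not drawn from Ibn al-Haytham's own text: the quantitative law of refraction (Snell's law) was only formalised centuries after his experiments, but it describes the bending of light he studied.

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# A modern formalisation of the refraction Ibn al-Haytham investigated
# experimentally in the Book of Optics.
def refraction_angle(theta1_deg, n1, n2):
    """Angle of the refracted ray in degrees, or None if the ray is
    totally internally reflected (no refracted ray exists)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light passing from air (n ~ 1.0) into glass (n ~ 1.5) at 45 degrees
# bends toward the normal, to roughly 28 degrees.
print(refraction_angle(45.0, 1.0, 1.5))
```

This bending toward the denser medium is precisely the behaviour exploited in the lenses and mirrors his work went on to influence.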
His work in optics, including the <em>principles of reflection and refraction,</em> not only advanced the field of physics but also had practical implications in engineering, influencing the development of lenses, mirrors, and even architectural designs to optimise light.</p><p>His ability to traverse disciplines and apply scientific principles to solve practical problems illustrates the value of a multidisciplinary approach in driving technological progress.&nbsp;</p><p>Inspired by Ibn al-Haytham's example, Arkinfo Notes has been a platform for exploring the intersectionality of engineering with other domains such as design, art, and even philosophy. Recognising that innovation often springs from the confluence of diverse perspectives, I have always made it a priority to discuss and analyse how principles from various fields can inform and enhance engineering projects.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kkOv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kkOv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!kkOv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!kkOv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 
1272w, https://substackcdn.com/image/fetch/$s_!kkOv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kkOv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png" width="1000" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kkOv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!kkOv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!kkOv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 1272w, 
https://substackcdn.com/image/fetch/$s_!kkOv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F56a00029-bdc9-4b6f-b7eb-00eb0d0182c6_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Artwork - A Portrait of Ibn al-Haytham (This image is generated using AI and does not represent the real-life likeness of the figure)</figcaption></figure></div><p>By highlighting projects and ideas that embody this multidisciplinary integration, Arkinfo Notes has helped me (and hopefully my readers as well) achieve a broader understanding and 
appreciation of the interconnected nature of knowledge.</p><h2>Al-Khazini and the Importance of Creativity &amp; Polymathy</h2><p>Al-Khazini, who flourished in the 12th century, was a polymath of the Islamic Golden Age, credited with significant contributions in the fields of physics, astronomy, and engineering. Though his exact origins are debated, he is believed to have been of either Uighur or Persian descent, working in the region spanning present-day Iran and Turkmenistan. His most renowned work, <em>"The Book of the Balance of Wisdom,"</em> is a comprehensive treatise on the science of weights and <em>the principles of balance.</em></p><p>Creativity, as illustrated by Al-Khazini, is indispensable in engineering, enabling the leap from theoretical science to practical application. His work underscores the role of innovative thinking in solving complex problems and developing new technologies.&nbsp;</p><p>In modern engineering challenges, where solutions are not always apparent within the confines of existing knowledge, Al-Khazini's approach reminds us of the value of creative exploration beyond traditional boundaries. Inspired by Al-Khazini's legacy, at Arkinfo Notes, we have embraced <em>creativity</em> as a <em>core principle</em> in discussing engineering solutions.&nbsp;</p><p>In our projects at Arkinfo, we often encounter challenges that cannot be addressed through conventional methods alone. 
Drawing on Al-Khazini&#8217;s example, we've sought to cultivate an environment where creative thinking is encouraged, allowing us to devise innovative solutions that transcend standard approaches.&nbsp;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qMGq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qMGq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qMGq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qMGq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!qMGq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd79ef2a6-165e-457c-9a43-95dbe8b68bb8_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Artwork - A Portrait of Al Khazini (This image is generated using AI and does not represent the real-life likeness of the figure)</figcaption></figure></div><p>Al-Khazini's inventive approach to the science of mechanics, particularly his work on <em>hydrostatic balances,</em> demonstrated an exceptional blend of creativity and analytical thinking. His development of the concept of <em>specific gravity</em> and its application in determining the composition of alloys had profound implications for materials science, a field crucial to engineering. By applying theoretical insights to practical instruments, Al-Khazini exemplified how creative thinking can lead to technological advancements that are both innovative and applicable in the real world.</p><p>Through Arkinfo Notes, I share these experiences, highlighting how creativity is harnessed to push the boundaries of what is technically possible. 
This not only reflects our commitment to innovation but also serves to inspire our readers to think creatively in their endeavours, fostering a community where the imaginative application of engineering principles leads to groundbreaking advancements.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/subscribe?"><span>Subscribe now</span></a></p><h2>Ahmad Lahori and the Importance of Art</h2><p>Ustad Ahmad Lahori was a prominent architect in the 17th century, widely regarded as the principal designer of the <em>Taj Mahal,</em> one of the most iconic symbols of Indian architecture and a masterpiece of Mughal engineering.&nbsp;</p><p>Though details about his life are sparse, historical records suggest he was of Persian descent and served as the chief architect under the Mughal Emperor Shah Jahan. His work on the Taj Mahal, alongside other architectural feats, is a remarkable fusion of engineering precision and artistic beauty.</p><p>Ahmad Lahori's architectural genius lies not just in the aesthetic grandeur of his creations but also in the innovative engineering techniques he employed. The Taj Mahal, for example, is celebrated not only for its unparalleled beauty but also for its structural ingenuity, including the complex interplay of light and shadow, the use of symmetry, and the incorporation of elements designed to withstand the test of time. 
Lahori&#8217;s work exemplifies how art and engineering can come together to create structures that are both functional and emotionally resonant.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!e0I-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!e0I-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!e0I-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!e0I-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!e0I-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F606d2261-0864-4c06-a0b0-c9245a8b82c9_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Artwork - A Portrait of Ustad Ahmad Lahori (This image is generated using AI and does not represent the real-life likeness of the figure)</figcaption></figure></div><p>This fusion of art and engineering not only enhances the usability and appeal of engineered products but also inspires solutions that are more holistic and human-centred.</p><p>At Arkinfo Notes, I have actively explored the intersection of art and engineering in our discussions. In our pursuit of innovation, it is important to recognise that engineering solutions are not merely about solving problems but also about touching lives and evoking wonder.&nbsp;</p><h2>Sameera Moussa and the Importance of Sharing Knowledge</h2><p>Sameera Moussa, born in Egypt in 1917, was a groundbreaking nuclear physicist who dedicated her career to making medical nuclear technology accessible to everyone. 
Moussa held a doctorate in atomic radiation and was the first woman to hold a university post as a lecturer in her field in Egypt. She was a visionary who believed in the potential of nuclear technology to revolutionise medicine and sought to demystify its benefits for the greater good. Her efforts led her to organise the <em>"Atoms for Peace"</em> conference and advocate for making nuclear medicine affordable to all, regardless of geographical or economic barriers.</p><p>Dr. Moussa's work in nuclear physics and her advocacy for accessible healthcare technology show the impact that engineers and scientists can have on society. Her research on breaking down the atoms of cheap materials to produce the same effect as uranium was pioneering. This not only shows the potential for significant advancements in medical treatment but also demonstrates a commitment to ethical science and technology. Moussa&#8217;s life was tragically cut short in an accident in 1952, yet her legacy of using engineering to serve humanity continues to inspire.</p><p>Sameera Moussa's belief in the democratisation of scientific knowledge is a critical aspect of engineering and technological development. Her vision reflects a broader ethos: the true value of scientific discovery lies in its ability to benefit all of humanity, not just a select few. 
This principle is especially relevant today as we navigate the challenges of making advanced technologies equitable and widely available.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cZyD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cZyD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cZyD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png" width="1000" height="1000" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cZyD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 424w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 848w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 1272w, https://substackcdn.com/image/fetch/$s_!cZyD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbf42205c-0e25-4319-9a9b-e37604788184_1000x1000.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Artwork - A Portrait of Dr. Sameera Moussa (This image is generated using AI and does not represent the real-life likeness of the figure)</figcaption></figure></div><p>Arkinfo Notes has been committed to demystifying complex engineering and technological concepts for a broader audience at no cost. 
Through our newsletter, we aim to share insights into our work and industry trends, with a focus on their societal impacts, ensuring that the benefits of technological advancements are widely understood and accessible.&nbsp;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h2>What&#8217;s Ahead</h2><p>Al Jazari, Ibn al-Haytham, Al-Khazini, Ustad Ahmad Lahori, and Sameera Moussa were remarkable individuals from diverse backgrounds and epochs, exemplifying the enduring spirit of curiosity, creativity, and commitment to the betterment of humanity that lies at the heart of engineering.&nbsp;</p><p>At Arkinfo, and through Arkinfo Notes, we've endeavoured to embody these principles, fostering a community where ideas flourish, innovation thrives, and knowledge is a bridge, not a barrier.&nbsp;</p><p>Your curiosity, engagement, and support have been the wind beneath our wings, propelling us forward and encouraging us to dream bigger and to strive for a future where technology and humanity walk hand in hand.</p><p>Looking ahead, I am excited for what the future holds. With each challenge we face, we find new opportunities to innovate, to learn, and to grow. Together, let's continue to explore the uncharted, to question the unquestioned, and to share the wealth of our discoveries. The journey of Arkinfo Notes is not just about chronicling the past but about shaping the future together.</p><p>Thank you for being a part of this adventure. Here's to another year of exploration, innovation, and shared success. 
Let us venture forth with the same spirit of inquiry and openness that has brought us this far, ready to write the next chapter in our collective story.</p><div><hr></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://drive.google.com/file/d/1-RJkUKRZR10QfkQ1x0BikV2iKjnJONTs/view?usp=sharing&quot;,&quot;text&quot;:&quot;Download Article as PDF&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://drive.google.com/file/d/1-RJkUKRZR10QfkQ1x0BikV2iKjnJONTs/view?usp=sharing"><span>Download Article as PDF</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Arkinfo Notes! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://notes.arkinfo.xyz/p/on-the-importance-of-writing-art/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item></channel></rss>