<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Not So Basic!]]></title><description><![CDATA[Not So Basic!]]></description><link>https://blog.sanathswaroop.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 16:15:19 GMT</lastBuildDate><atom:link href="https://blog.sanathswaroop.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Building ANTM.ai | Turn Your Knowledge into Intelligence]]></title><description><![CDATA[Your ideas, conversations, and discoveries shouldn't be scattered—they should form a coherent, evolving memory. That’s why we’ve built Brain 2.0, ANTM.ai’s reimagined knowledge platform, designed to help you capture, connect, and recall exactly what ...]]></description><link>https://blog.sanathswaroop.com/building-antm-ai-brain-2</link><guid isPermaLink="true">https://blog.sanathswaroop.com/building-antm-ai-brain-2</guid><category><![CDATA[llm]]></category><category><![CDATA[AI]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[LLaMa]]></category><category><![CDATA[gemini]]></category><category><![CDATA[cursor]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[PostgreSQL]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[React]]></category><category><![CDATA[Tailwind CSS]]></category><category><![CDATA[monorepo]]></category><category><![CDATA[Docker]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Fri, 11 Jul 2025 07:00:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751706479024/e631b25e-7b12-473a-bd48-f7123f6f822c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your ideas, conversations, and discoveries shouldn't be scattered—they should form a coherent, evolving memory. That’s why we’ve built <strong>Brain 2.0</strong>, <a target="_blank" href="http://ANTM.ai">ANTM.ai</a>’s reimagined knowledge platform, designed to help you <strong>capture</strong>, <strong>connect</strong>, and <strong>recall</strong> exactly what matters—immediately.</p>
<hr />
<h3 id="heading-from-cantstudyhttpcantstudy-notesyaihttpnotesyai-antmaihttpantmai">From <a target="_blank" href="http://CANT.study">CANT.study</a> → <a target="_blank" href="http://Notesy.ai">Notesy.ai</a> → <a target="_blank" href="http://ANTM.ai">ANTM.ai</a></h3>
<p>Our journey began with <a target="_blank" href="http://CANT.study"><strong>CANT.study</strong></a>, a context-aware lecture transcription tool—born during “Build Night” while working live in public. From there, we experimented with broader note-taking concepts under <a target="_blank" href="http://Notesy.ai"><strong>Notesy.ai</strong></a>, capturing multi-source information. Now, we’re proud to present <a target="_blank" href="http://ANTM.ai"><strong>ANTM.ai</strong></a> <strong>Brain 2.0</strong>: a full-featured, AI-powered memory graph built on those principles.</p>
<hr />
<h3 id="heading-the-vision-behind-brain-20">The Vision Behind Brain 2.0</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751706513810/dac8c87e-ae23-42e0-a037-22b79cba009e.png" alt class="image--center mx-auto" /></p>
<p>Tools today let you store text, audio, and links—but few turn that data into connections. We saw an opportunity:</p>
<ul>
<li><p>Capture from anywhere—browser, email, voice, chat.</p>
</li>
<li><p>Use a knowledge graph to represent ideas as interconnected nodes and edges.</p>
</li>
<li><p>Continuously enhance your “brain” by discovering new patterns and suggestions.</p>
</li>
</ul>
<hr />
<h3 id="heading-how-brain-20-works">How Brain 2.0 Works</h3>
<ol>
<li><p><strong>Capture Everywhere</strong></p>
<ul>
<li><p>Web clipper for articles</p>
</li>
<li><p>Forwarded emails and chats</p>
</li>
<li><p>Voice and meeting transcription</p>
</li>
</ul>
</li>
<li><p><strong>AI Processing</strong></p>
<ul>
<li><p>Transcribe and embed content</p>
</li>
<li><p>Auto-tag items and identify entities</p>
</li>
<li><p>Establish connections automatically, creating a living graph (see the sketch after this list)</p>
</li>
</ul>
</li>
<li><p><strong>Discover Insights</strong></p>
<ul>
<li><p>Ask questions in natural language</p>
</li>
<li><p>Navigate your dynamic graph visually</p>
</li>
<li><p>Receive context-aware reminders and alerts</p>
</li>
</ul>
</li>
</ol>
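<p>To make the auto-connection step concrete, here is a minimal sketch of one way it could work: embed each captured item, then link every pair whose embeddings are similar. The <code>embed()</code> callable and the 0.8 threshold are placeholders, not ANTM.ai’s actual pipeline:</p>
<pre><code class="lang-python"># Minimal sketch: turn similar items into graph edges.
# embed() and the 0.8 threshold are illustrative assumptions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def connect(items, embed, threshold=0.8):
    """items: {item_id: text}. Returns (id_a, id_b, score) edges."""
    vectors = {item_id: embed(text) for item_id, text in items.items()}
    ids = list(vectors)
    edges = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            score = cosine(vectors[a], vectors[b])
            if score &gt;= threshold:
                edges.append((a, b, score))
    return edges
</code></pre>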
<hr />
<h3 id="heading-core-features">Core Features</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Capability</td></tr>
</thead>
<tbody>
<tr>
<td>Web &amp; Email Clip</td><td>One-click saving of articles and communications</td></tr>
<tr>
<td>Voice Memo Transcription</td><td>Speech-to-text capture and indexing</td></tr>
<tr>
<td>Auto-Tagging &amp; Node Creation</td><td>AI categorizes and builds connections</td></tr>
<tr>
<td>Interactive Memory Graph</td><td>Live graph view with detailed nodes and edges</td></tr>
<tr>
<td>Intelligent Search</td><td>Semantic queries like "what connects my Tokyo trip and project notes?"</td></tr>
<tr>
<td>Smart Reminders</td><td>Contextual suggestions based on your graph activity</td></tr>
<tr>
<td>Team Sharing &amp; Insights</td><td>Shared spaces with team-level connections and analytics</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-technical-highlights">Technical Highlights</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751706616735/c9755c13-d09e-4062-8f6c-455e2f3a97e2.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Graph Memory Engine</strong>: Real-time knowledge graph model enabling temporal and context-rich connections.</p>
</li>
<li><p><strong>Multi-modal Inputs</strong>: Integrates text, voice, and rich content into unified processing pipelines.</p>
</li>
<li><p><strong>Graph-RAG Combiner</strong>: Supports multi-hop reasoning over the graph for nuanced query responses (a toy version is sketched below).</p>
</li>
</ul>
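<p>As a toy illustration of what multi-hop reasoning means here (not our production combiner), the sketch below walks a small graph a couple of hops out from a seed note and collects everything reachable as context. It assumes the open-source <code>networkx</code> library, and the node names are invented:</p>
<pre><code class="lang-python"># Toy multi-hop retrieval: gather every node within `hops` edges of a seed.
import networkx as nx

def multi_hop_context(graph, seeds, hops=2):
    context = set()
    for seed in seeds:
        # dict of {node: distance} for nodes within `hops` of the seed
        reachable = nx.single_source_shortest_path_length(graph, seed, cutoff=hops)
        context.update(reachable)
    return sorted(graph.nodes[n].get("text", n) for n in context)

G = nx.Graph()
G.add_edge("tokyo-trip", "flight-notes")
G.add_edge("flight-notes", "project-kickoff")
print(multi_hop_context(G, ["tokyo-trip"]))  # two hops reach the project node
</code></pre>
<p>In Brain 2.0, the nodes gathered this way become the retrieval context an LLM uses to answer queries like the Tokyo-trip example above.</p>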
<hr />
<h3 id="heading-whats-ahead">What’s Ahead</h3>
<ol>
<li><p><strong>Open Beta</strong>: Launching with web clipper, email/voice capture, and basic graph recall.</p>
</li>
<li><p><strong>Graph Intelligence</strong>: Introducing auto-edge creation and logical connection inference.</p>
</li>
<li><p><strong>Collaborative Spaces</strong>: Share graphs with teams and surface collective insights.</p>
</li>
<li><p><strong>Agent Automations</strong>: Scheduled summaries, relationship alerts, and reminder agents.</p>
</li>
</ol>
<hr />
<h3 id="heading-why-it-matters">Why It Matters</h3>
<ul>
<li><p>Transforms static notes into a reflective, evolving brain—mirroring how we actually think.</p>
</li>
<li><p>Builds on lessons from <a target="_blank" href="https://CANT.study">CANT.study</a> and Notesy.ai, refined through Build Night’s transparent, feedback-driven development.</p>
</li>
<li><p>Offers intelligent recall and insight, so you can focus on thinking—not remembering.</p>
</li>
</ul>
<hr />
<p>Brain 2.0 through <a target="_blank" href="http://ANTM.ai">ANTM.ai</a> is the culmination of our public journey—from lecture transcription to a full-fledged personal knowledge assistant. <a target="_blank" href="http://antm.ai/"><strong>Get early access</strong></a> to shape this next generation of memory-first tools.</p>
]]></content:encoded></item><item><title><![CDATA[c0py.me 2.0 Is Here for Files! — Private • P2P • Instant]]></title><description><![CDATA[What Is c0py.me 2.0?

Gone are the days of a single textarea for quick text snippets on your local Wi-Fi. With c0py.me 2.0, we’ve graduated to a full-fledged, server-less sharing platform—secure, anonymous, and lightning fast. Whether it’s code, imag...]]></description><link>https://blog.sanathswaroop.com/c0py-me-2-0-private-p2p-instant</link><guid isPermaLink="true">https://blog.sanathswaroop.com/c0py-me-2-0-private-p2p-instant</guid><category><![CDATA[files]]></category><category><![CDATA[Share]]></category><category><![CDATA[p2p]]></category><category><![CDATA[WebRTC]]></category><category><![CDATA[SocketIO]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[React]]></category><category><![CDATA[cursor]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Sat, 05 Jul 2025 21:23:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751750630530/06d916ec-d949-428d-b87e-2aae03c6b1ea.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-c0pymehttpsc0pyme-20">What Is <a target="_blank" href="https://c0py.me/">c0py.me</a> 2.0?</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751749891786/5d0c3ef3-df8f-48c1-b010-e3c50ca93252.png" alt class="image--center mx-auto" /></p>
<p>Gone are the days of a single textarea for quick text snippets on your local Wi-Fi. With c0py.me 2.0, we’ve graduated to a full-fledged, server-less sharing platform—secure, anonymous, and lightning fast. Whether it’s code, images, documents, or voice notes, you can now share any payload directly between devices using end-to-end encrypted P2P connections.</p>
<hr />
<h2 id="heading-whats-changed">What’s Changed</h2>
<ol>
<li><p><strong>Brand Pillars</strong>:</p>
<ul>
<li><p><strong>Private</strong>: Zero servers. Zero traces.</p>
</li>
<li><p><strong>P2P</strong>: WebRTC mesh delivers direct device-to-device transfers.</p>
</li>
<li><p><strong>Instant</strong>: Real-time sharing with live progress feedback.</p>
</li>
</ul>
</li>
<li><p><strong>Expanded Scope</strong>:</p>
<ul>
<li><p>From plain text → full files, folders, and streams.</p>
</li>
<li><p>From single “Wi-Fi room” → three distinct “worlds” for every use case.</p>
</li>
</ul>
</li>
<li><p><strong>Delightful UX</strong>:</p>
<ul>
<li><p><strong>World Selector</strong> (“Jungle,” “Room,” “Family”) up top.</p>
</li>
<li><p><strong>Animated Cub Mascot</strong> showing live speed &amp; ETA.</p>
</li>
<li><p><strong>Connection Authorization</strong> modal for secure approvals.</p>
</li>
</ul>
</li>
<li><p><strong>Open-Source Commitment</strong>:</p>
<ul>
<li>MIT-licensed on GitHub, welcoming community plugins and contributions.</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-how-it-works">How It Works</h2>
<ol>
<li><p><strong>Choose Your World</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751749924633/c73c2196-1212-4c37-8210-d3661a2ee100.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p><strong>Jungle</strong>: Public, global namespace.</p>
</li>
<li><p><strong>Room</strong>: Private, code-based invites.</p>
</li>
<li><p><strong>Family</strong>: Auto-discover peers on the same Wi-Fi.</p>
</li>
</ul>
</li>
<li><p><strong>Select &amp; Send</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751750017736/831790ca-d777-44df-a87e-22b3047ea913.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Pick text snippets, files, folders, or media.</p>
</li>
<li><p>Click “Share” and our cub springs into action.</p>
</li>
</ul>
</li>
<li><p><strong>P2P Transfer</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751750046750/2b2c3981-f1fa-4a0c-a9e1-ce75f2820214.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>WebRTC datachannels carry your data.</p>
</li>
<li><p>End-to-end encryption ensures privacy.</p>
</li>
<li><p>Live progress and time estimates from the cub (see the sketch after this list).</p>
</li>
</ul>
</li>
<li><p><strong>Transfer Complete</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751750118255/37845270-7dad-482d-9123-736817e62485.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
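<p>The real transfer path is WebRTC datachannels in the browser; the Python sketch below just illustrates the bookkeeping behind the cub’s live stats mentioned above: split the payload into chunks, send each one, and recompute speed and ETA as you go. <code>send_chunk()</code> and <code>on_progress()</code> are stand-ins:</p>
<pre><code class="lang-python"># Sketch of the transfer loop behind the live speed/ETA display.
# send_chunk() stands in for the real datachannel send.
import time

CHUNK_SIZE = 16 * 1024  # 16 KiB, a commonly safe datachannel message size

def transfer(data, send_chunk, on_progress):
    start = time.monotonic()
    total = len(data)
    for offset in range(0, total, CHUNK_SIZE):
        send_chunk(data[offset:offset + CHUNK_SIZE])
        sent = min(offset + CHUNK_SIZE, total)
        elapsed = time.monotonic() - start
        speed = sent / elapsed if elapsed else 0.0          # bytes/sec
        eta = (total - sent) / speed if speed else 0.0      # seconds left
        on_progress(sent / total, speed, eta)
</code></pre>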
<hr />
<h2 id="heading-whos-it-for">Who’s It For</h2>
<ul>
<li><p><strong>Hackathon Warriors</strong><br />  Instant code or asset sharing when every second counts.</p>
</li>
<li><p><strong>Remote &amp; Hybrid Teams</strong><br />  Collaborate securely without VPNs or cloud storage.</p>
</li>
<li><p><strong>Friends &amp; Family</strong><br />  Send photos, videos, or shopping lists—no installs, no accounts.</p>
</li>
<li><p><strong>Privacy-Minded Users</strong><br />  Keep your data off third-party servers and in your own hands.</p>
</li>
</ul>
<hr />
<h2 id="heading-try-it-in-5-seconds">Try It in 5 Seconds</h2>
<ol>
<li><p>Visit <strong>c0py.me</strong></p>
</li>
<li><p>Choose your <strong>World</strong></p>
</li>
<li><p>Paste or drop your payload</p>
</li>
<li><p>Click <strong>Share</strong></p>
</li>
<li><p>Watch it appear instantly on other devices</p>
</li>
</ol>
<p>No sign-ups. No servers. Just pure, peer-to-peer sharing.</p>
<hr />
<h2 id="heading-support-us-on-github">Support Us on GitHub</h2>
<p>c0py.me is fully open-source under MIT. ⭐️ Star the repo, submit issues, and contribute features:<br /><a target="_blank" href="https://github.com/sanath1599/c0py.me">github.com/sanath1599/c0py.me</a></p>
<p><a target="_blank" href="https://c0py.me/">Checkout of c0py.me and Start Sharing</a></p>
<p>Your feedback drives the next wave of features—join us in shaping the future of private, peer-to-peer sharing!</p>
]]></content:encoded></item><item><title><![CDATA[We’re Live on Product Hunt — Introducing c0py.me]]></title><description><![CDATA[Ever needed to quickly share a block of text with someone nearby—without sending links, making accounts, or dealing with cluttered interfaces?
Say hello to c0py.me — the fastest way to share text across devices on the same Wi-Fi network. No accounts....]]></description><link>https://blog.sanathswaroop.com/introducing-c0py-me</link><guid isPermaLink="true">https://blog.sanathswaroop.com/introducing-c0py-me</guid><category><![CDATA[sharing]]></category><category><![CDATA[pastebin]]></category><category><![CDATA[localstorage]]></category><category><![CDATA[networking]]></category><category><![CDATA[Copy]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Fri, 06 Jun 2025 00:36:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749170099474/0a13b68f-8338-4231-8730-057779a6f3d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever needed to quickly share a block of text with someone nearby—without sending links, making accounts, or dealing with cluttered interfaces?</p>
<p>Say hello to <a target="_blank" href="https://c0py.me"><strong>c0py.me</strong></a> — the fastest way to share text across devices on the same Wi-Fi network. No accounts. No custom links. Just paste and go.</p>
<hr />
<h2 id="heading-what-is-c0pyme">What Is c0py.me?</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/PNmnOpZRI1s">https://youtu.be/PNmnOpZRI1s</a></div>
<p> </p>
<p>c0py.me is a local-first text-sharing tool. Open the site in your browser, paste what you want to share, hit <strong>“Share to Network,”</strong> and anyone else on the same Wi-Fi visiting c0py.me will instantly see it.</p>
<p>Perfect for hackathons, classrooms, or anywhere you’re working side-by-side and need instant collaboration.</p>
<h2 id="heading-key-features">Key Features</h2>
<ul>
<li><p><strong>No Sign-Ups or Installs</strong><br />  No login, no setup—just visit c0py.me and start sharing right away.</p>
</li>
<li><p><strong>Instant Peer Visibility</strong><br />  Share something and it pops up on everyone else's screen in real time. Great for code snippets, ideas, or tasks.</p>
</li>
<li><p><strong>Minimal, Focused UI</strong><br />  No clutter. Just a clean textarea, a “Share to Network” button, and a real-time feed of what’s been shared.</p>
</li>
<li><p><strong>Dark &amp; Light Mode</strong><br />  Automatically detects your system theme. Easy on the eyes day or night, with a manual toggle if you want to switch it up.</p>
</li>
</ul>
<hr />
<h2 id="heading-whos-it-for">Who’s It For?</h2>
<ul>
<li><p><strong>Hackathon Teams</strong><br />  When the clock’s ticking, and you just want to share that one line of code or JSON payload—c0py.me makes it effortless.</p>
</li>
<li><p><strong>Classrooms, Workplace &amp; Study Groups</strong><br />  Teachers can share example problems or notes to everyone’s device in seconds. Students can collaborate without needing shared docs or messy group chats.</p>
</li>
<li><p><strong>Home Networks</strong><br />  Need to send a grocery list or a quick message between family devices? Open c0py.me and everyone’s on the same page—literally.</p>
</li>
</ul>
<hr />
<h2 id="heading-try-it-in-5-seconds">Try It in 5 Seconds</h2>
<ol>
<li><p>Go to <a target="_blank" href="https://c0py.me"><strong>c0py.me</strong></a></p>
</li>
<li><p>Paste or type something</p>
</li>
<li><p>Click <strong>“Share to Network”</strong></p>
</li>
<li><p>Anyone else on the same Wi-Fi who opens c0py.me will see it instantly</p>
</li>
</ol>
<p>No copy-paste links. No refreshing. Just pure, instant sharing.</p>
<hr />
<h2 id="heading-whats-coming-next">What’s Coming Next</h2>
<p>We’re just getting started. Soon, you’ll be able to:</p>
<ul>
<li><p>Set auto-expiry times on snippets</p>
</li>
<li><p>Render Markdown and styled notes</p>
</li>
<li><p>View snippet history locally</p>
</li>
<li><p>And lots more tiny features to make text sharing even smoother</p>
</li>
</ul>
<hr />
<h2 id="heading-support-us-on-product-hunt">Support Us on Product Hunt!</h2>
<p>We’ve launched on Product Hunt and would love your support. If you’ve ever fumbled with pastebins, shared links that didn’t load, or just wanted something that <em>works instantly</em>, check us out and give us a shout.</p>
<p><a target="_blank" href="https://www.producthunt.com/products/c0py-me"><strong>Visit our Product Hunt page</strong></a> <a target="_blank" href="https://www.producthunt.com/products/c0py-me">https://www.producthunt.com/products/c0py-me</a></p>
]]></content:encoded></item><item><title><![CDATA[We Chose Human Intelligence Over AI — And It Worked Better Than We Hoped]]></title><description><![CDATA[At a recent AI for Good Hackathon, our team, Rooted in Resilience, set out to solve a real-world challenge brought to us by Refugee & Immigrant Transitions (RIT) — a nonprofit that supports immigrants and refugees through English language and job-rea...]]></description><link>https://blog.sanathswaroop.com/intelligence-supersedes-artificial-intelligence</link><guid isPermaLink="true">https://blog.sanathswaroop.com/intelligence-supersedes-artificial-intelligence</guid><category><![CDATA[AI]]></category><category><![CDATA[hackathon]]></category><category><![CDATA[opencv]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[#anthropic]]></category><category><![CDATA[gemini]]></category><category><![CDATA[Google]]></category><category><![CDATA[openai]]></category><category><![CDATA[Python]]></category><category><![CDATA[Intelligence]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[Vision]]></category><category><![CDATA[MERN Stack]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Wed, 04 Jun 2025 17:13:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749056287834/731a07e4-bc80-4f3a-9e43-f8e8559053ea.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At a recent <em>AI for Good</em> Hackathon, our team, <em>Rooted in Resilience</em>, set out to solve a real-world challenge brought to us by <strong>Refugee &amp; Immigrant Transitions (RIT)</strong> — a nonprofit that supports immigrants and refugees through English language and job-readiness programs.</p>
<p>Their process for collecting feedback was painfully analog:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749057050515/aa40e404-2188-4486-a05f-499c69ec21a2.jpeg" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Distribute printed surveys.<br />Students handwrite their responses.<br />Volunteers manually enter data into spreadsheets.<br />Occasionally, mistakes creep in.<br />Always, it takes too much time.</p>
</blockquote>
<p>So we asked the obvious question: <em>Can AI help?</em></p>
<h2 id="heading-why-ai-wasnt-the-best-answer">Why AI <em>Wasn’t</em> the Best Answer</h2>
<p>Like most teams at the hackathon, we started with modern LLMs — GPT-4, Claude, Gemini — to interpret scanned forms. The results?</p>
<ul>
<li><p>Text fields: surprisingly accurate.</p>
</li>
<li><p>Checkboxes: 40–60% failure rate, even with top-tier models.</p>
</li>
<li><p>Cost: way too high for a nonprofit running on limited funds.</p>
</li>
</ul>
<p>We realized quickly that no matter how sophisticated the AI, checkbox detection was too flaky — and definitely not budget-friendly.</p>
<hr />
<h2 id="heading-so-we-did-the-unthinkable">So We Did the Unthinkable:</h2>
<h3 id="heading-we-chose-to-be-intelligent-instead">We Chose to Be Intelligent Instead</h3>
<p>Instead of brute-forcing AI into a problem it wasn’t designed to solve, we <strong>designed around it</strong>.</p>
<p>We built <strong>SnapScan</strong> — a complete survey automation system that works <em>without</em> needing advanced AI for every input. Here's how it works:</p>
<hr />
<h2 id="heading-snapscan-our-solution">SnapScan: Our Solution</h2>
<h3 id="heading-1-custom-form-generator">1. <strong>Custom Form Generator</strong></h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749056891328/08309103-2e58-4eb6-ad7b-91692b5c6b5a.png" alt class="image--center mx-auto" /></p>
<p>We built a tool that lets volunteers recreate RIT’s paper forms — preserving the layout and structure, but also adding two major upgrades:</p>
<ul>
<li><p><strong>Exact box coordinate mapping</strong> (crucial for visual detection)</p>
</li>
<li><p><strong>A unique QR code</strong> to tag each form to the right spreadsheet</p>
</li>
</ul>
<h3 id="heading-2-scan-and-detect">2. <strong>Scan and Detect</strong></h3>
<p>Once forms are filled and scanned:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749056942569/34f95da4-542c-4227-ba33-e795cc66c23d.jpeg" alt class="image--center mx-auto" /></p>
<ul>
<li><p>The QR code tells the system which questions and spreadsheet to sync with</p>
</li>
<li><p>OpenCV checks for checkbox marks using pixel-perfect box mapping (sketched below)</p>
</li>
<li><p>OCR handles the occasional handwritten free-text fields</p>
</li>
</ul>
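<p>Here is a simplified version of that checkbox check, assuming OpenCV and the coordinate map produced by the form generator. The coordinates and the 15% ink threshold below are illustrative values, not the exact ones SnapScan uses:</p>
<pre><code class="lang-python"># Sketch: crop each checkbox by its known coordinates, binarize with Otsu,
# and call it "checked" if enough of the box is inked. Values are illustrative.
import cv2

def is_checked(page, box, fill_threshold=0.15):
    x, y, w, h = box  # (x, y, width, height) from the form's coordinate map
    roi = page[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    fill = cv2.countNonZero(binary) / float(w * h)
    return fill &gt;= fill_threshold

page = cv2.imread("scanned_form.png")        # hypothetical scan
print(is_checked(page, (120, 340, 28, 28)))  # hypothetical box coordinates
</code></pre>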
<h3 id="heading-3-auto-sync-to-google-sheets">3. <strong>Auto-Sync to Google Sheets</strong></h3>
<p>Every response is instantly logged into a shared spreadsheet — no sorting, no manual entry, no fuss.</p>
<hr />
<h2 id="heading-a-better-demo-than-ai">A Better Demo Than AI</h2>
<p>Here’s how we introduced it at the hackathon:</p>
<blockquote>
<p><em>"This is the form RIT currently uses — built in Word. Here’s what it looks like in our system. Same format, but now there’s a QR code."</em><br /><em>"Alberto — our fictional RIT volunteer — prints and distributes the forms. Once they’re filled out, he uploads all scanned forms to a Google Drive folder."</em><br /><em>"SnapScan reads the QR code, detects the checkboxes, extracts the handwriting, and updates the spreadsheet. All automatically."</em></p>
</blockquote>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=8ZoVevdKZik">https://www.youtube.com/watch?v=8ZoVevdKZik</a></div>
<p> </p>
<hr />
<h2 id="heading-why-it-worked">Why It Worked</h2>
<ul>
<li><p><strong>100% checkbox accuracy</strong> thanks to custom layout and OpenCV</p>
</li>
<li><p><strong>Zero per-form AI costs</strong> — a huge win for nonprofits</p>
</li>
<li><p><strong>Fully automated pipeline</strong> — from scan to sheet</p>
</li>
<li><p><strong>Volunteer-friendly</strong> — no training needed, no tech knowledge required</p>
</li>
</ul>
<hr />
<p>We didn’t win the hackathon.<br />We didn’t use the flashiest LLM stack.<br />We didn’t even strictly qualify under the "AI requirement."</p>
<p>But we built something that <strong>actually works</strong> — accurately, affordably, and at scale — for a nonprofit that truly needed it.</p>
<p>And sometimes, that’s worth more than a trophy.</p>
<hr />
<h3 id="heading-the-team">The Team:</h3>
<ol>
<li><p>Sanath Swaroop Mulky</p>
</li>
<li><p>Aditi Dani</p>
</li>
<li><p>Manav Chandani</p>
</li>
<li><p>Prithvi Elancherran</p>
</li>
<li><p>Armin Foroughi</p>
</li>
</ol>
<hr />
<h3 id="heading-the-tech-stack">The Tech Stack:</h3>
<ul>
<li><p>OpenCV (checkbox detection)</p>
</li>
<li><p><strong><em>pytesseract</em></strong> OCR (text extraction)</p>
</li>
<li><p>React + Firebase (frontend &amp; auth)</p>
</li>
<li><p>Google Drive + Sheets API (storage &amp; sync)</p>
</li>
<li><p>Node.js (backend)</p>
</li>
</ul>
<hr />
<h3 id="heading-want-to-try-snapscan-or-contribute">Want to Try SnapScan or Contribute?</h3>
<p>We're exploring open-sourcing the tool for other nonprofits.<br />Leave a comment or DM if you're interested in collaborating or piloting it with your organization.</p>
]]></content:encoded></item><item><title><![CDATA[Big Update: CANT.study Now Supports YouTube Videos!]]></title><description><![CDATA[When we launched CANT.study, our goal was simple: to help students capture and understand spoken content (lectures, audio notes, and recordings) more efficiently. Today, we’re excited to announce a major step forward in that mission.
CANT.study now su...]]></description><link>https://blog.sanathswaroop.com/antstudy-now-supports-youtube-videos</link><guid isPermaLink="true">https://blog.sanathswaroop.com/antstudy-now-supports-youtube-videos</guid><category><![CDATA[notes]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[youtube]]></category><category><![CDATA[youtube learning]]></category><category><![CDATA[note-taking]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Mon, 26 May 2025 07:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749171911988/3518e2be-aba8-4cfb-9738-e4f60af78747.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When we launched <a target="_blank" href="https://cant.study">CANT.study</a>, our goal was simple: to help students capture and understand spoken content (lectures, audio notes, and recordings) more efficiently. Today, we’re excited to announce a major step forward in that mission.</p>
<p><strong>CANT.study now supports YouTube video inputs.</strong></p>
<p>This update opens up a new world of learning possibilities by allowing students to extract meaningful notes, summaries, and transcripts from educational videos with zero manual effort.</p>
<hr />
<h2 id="heading-whats-new">What’s New</h2>
<h3 id="heading-youtube-link-support">YouTube Link Support</h3>
<p>Paste any public YouTube video URL into CANT.study, and our system will fetch the content and begin processing it just like a lecture recording.</p>
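<p>For the curious, fetching a public video’s caption track can be as simple as the sketch below, which uses the open-source <code>youtube-transcript-api</code> package’s classic interface. This is illustrative only, not necessarily how CANT.study ingests videos:</p>
<pre><code class="lang-python"># Illustrative only: pull a public video's transcript before summarization.
from youtube_transcript_api import YouTubeTranscriptApi

def fetch_transcript(video_id):
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    # Each segment is {"text": ..., "start": ..., "duration": ...}
    return " ".join(seg["text"] for seg in segments)

print(fetch_transcript("BfGLYYNhXx8")[:200])  # the demo video embedded below
</code></pre>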
<h3 id="heading-automatic-transcription-and-summarization">Automatic Transcription and Summarization</h3>
<p>CANT.study transcribes the spoken content in the video, summarizes it into concise points, and presents it in an organized, study-friendly format.</p>
<h3 id="heading-search-across-video-content">Search Across Video Content</h3>
<p>Once processed, you can search the transcript and summary to find specific concepts, keywords, or explanations—without having to rewatch the video.</p>
<h3 id="heading-unified-note-management">Unified Note Management</h3>
<p>Whether you’re using recordings, audio uploads, or YouTube content, all notes live in one place. Organized, searchable, and ready whenever you need them.</p>
<hr />
<h2 id="heading-use-cases-this-unlocks">Use Cases This Unlocks</h2>
<ul>
<li><p><strong>YouTube-Based Study Sessions</strong><br />  Turn educational videos into study notes you can quickly revise, search, or export.</p>
</li>
<li><p><strong>Exam Prep From Online Resources</strong><br />  Summarize tutorial videos the night before an exam to focus on the important bits instead of rewatching everything.</p>
</li>
<li><p><strong>Accessibility</strong><br />  For students who prefer written content or need transcripts due to hearing impairments, YouTube videos now become equally usable learning material.</p>
</li>
<li><p><strong>Second Language Support</strong><br />  Non-native speakers can break down complex spoken content into simplified summaries, improving comprehension and retention.</p>
</li>
</ul>
<hr />
<h2 id="heading-how-to-use-it">How to Use It</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/BfGLYYNhXx8">https://youtu.be/BfGLYYNhXx8</a></div>
<p> </p>
<ol>
<li><p>Visit <a target="_blank" href="https://cant.study">CANT.study</a></p>
</li>
<li><p>Paste a YouTube video link</p>
</li>
<li><p>Wait a few moments while we process the content</p>
</li>
<li><p>View the transcript, summary, key highlights, and timestamps</p>
</li>
<li><p>Save your notes, search through them, or export them as needed</p>
</li>
</ol>
<hr />
<h2 id="heading-whats-coming-next">What’s Coming Next</h2>
<p>We’re already working on the next set of enhancements, including:</p>
<ul>
<li><p>Support for YouTube playlists</p>
</li>
<li><p>Unlisted video support</p>
</li>
<li><p>PDF and Notion-style export</p>
</li>
<li><p>AI-generated flashcards from video content</p>
</li>
<li><p>Collaborative annotation features</p>
</li>
</ul>
<hr />
<h2 id="heading-join-the-journey">Join the Journey</h2>
<p>If you’ve been using YouTube as a learning tool, CANT.study now makes that process structured and efficient. No more pausing, rewinding, or manually jotting down notes.</p>
<p><a target="_blank" href="https://www.producthunt.com/products/cant">Visit us on Product Hunt</a>, try out the new feature, and let us know what you think. Your feedback drives what we build next.</p>
<p><a target="_blank" href="https://www.producthunt.com/products/cant">https://www.producthunt.com/products/cant</a></p>
]]></content:encoded></item><item><title><![CDATA[Introducing “Build Night!” — Kicking Off Our Public Development Series]]></title><description><![CDATA[Marching straight from launch week’s excitement, I’m thrilled to announce Build Night!, a new blog-series where you can follow—and take part in—every step of building our next-gen note-taking platform. Whether you’re a student, developer, or GenAI en...]]></description><link>https://blog.sanathswaroop.com/introducing-build-night</link><guid isPermaLink="true">https://blog.sanathswaroop.com/introducing-build-night</guid><category><![CDATA[genai]]></category><category><![CDATA[Build In Public]]></category><category><![CDATA[llm]]></category><category><![CDATA[notes]]></category><category><![CDATA[note-taking]]></category><category><![CDATA[notes-taking-app]]></category><category><![CDATA[notesapp]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Tue, 13 May 2025 08:19:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747123862750/1ca62a21-3067-4225-9282-a4734222d412.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Marching straight from launch week’s excitement, I’m thrilled to announce <strong>Build Night!</strong>, a new blog-series where you can follow—and take part in—every step of building our next-gen note-taking platform. Whether you’re a student, developer, or GenAI enthusiast, this is your all-access pass to see code, features, and design decisions unfold in real-time.</p>
<hr />
<h2 id="heading-what-is-build-night">What Is Build Night?</h2>
<ul>
<li><p><strong>Live, in public</strong>: Every feature release, bug fix, and design tweak is published here first.</p>
</li>
<li><p><strong>User-driven</strong>: We’ll prioritize and prototype based on your feedback—no more guesswork.</p>
</li>
<li><p><strong>Weekly updates</strong>: Tune in each Tuesday evening for release notes, pipeline glimpses, and roadmap shifts.</p>
</li>
</ul>
<hr />
<h2 id="heading-first-demo-cantstudyhttpcantstudy-feature-walkthrough">First Demo: <a target="_blank" href="http://CANT.study">CANT.study</a> Feature Walkthrough</h2>
<p>To launch Build Night, we’re starting with the <a target="_blank" href="http://CANT.study"><strong>CANT.study</strong></a> <strong>→</strong> <a target="_blank" href="http://wehelpyou.study"><strong>wehelpyou.study</strong></a> demo session itself.</p>
<p><strong>Highlights:</strong></p>
<ol>
<li><p><strong>Start &amp; Stop Recording</strong> with one click – 30 s audio chunks keep things smooth.</p>
</li>
<li><p><strong>Context Correction Layer</strong> – NVIDIA Canary transcripts meet slide titles &amp; jargon.</p>
</li>
<li><p><strong>Iterative Summarization</strong> – Chunk-by-chunk summaries refined into one cohesive document.</p>
</li>
<li><p><strong>24-Hour Note Retention</strong> – Structured notes live for a day before auto-purge.</p>
</li>
<li><p><strong>Observability Built-In</strong> – Langfuse traces + Elasticsearch log streaming for instant debugging.</p>
</li>
</ol>
<hr />
<h2 id="heading-under-the-hood-architecture-snapshot">Under the Hood: Architecture Snapshot</h2>
<p>Every Build Night post will peel back a different layer of the stack. For tonight’s kickoff:</p>
<ul>
<li><p><strong>Frontend</strong>: React + Tailwind UI</p>
</li>
<li><p><strong>API Layer</strong>: Node.js/Express with JWT auth &amp; Langfuse integration</p>
</li>
<li><p><strong>Transcription</strong>: NVIDIA Canary → Context Correction Layer</p>
</li>
<li><p><strong>Summarization</strong>: Llama 3 8B-Instruct via chunk-iterate pipeline (sketched after this list)</p>
</li>
<li><p><strong>Observability</strong>: Langfuse tracing + Elasticsearch log streaming</p>
</li>
</ul>
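<p>For the curious, here is a minimal sketch of the chunk-iterate summarization step called out in the list above, assuming some <code>llm(prompt)</code> callable (in our stack that role is played by Llama 3 8B-Instruct behind the API layer). Chunk size and prompt wording are placeholders:</p>
<pre><code class="lang-python"># Minimal chunk-then-iterate summarizer. llm() is a placeholder callable;
# chunk size and prompts are illustrative, not the production values.
def summarize_lecture(transcript, llm, chunk_chars=8000):
    chunks = [transcript[i:i + chunk_chars]
              for i in range(0, len(transcript), chunk_chars)]
    summary = ""
    for chunk in chunks:
        # Each pass folds the next chunk into the running summary, so the
        # result stays one cohesive document rather than disjoint fragments.
        summary = llm(
            "Refine these lecture notes using the new transcript chunk.\n"
            f"Notes so far:\n{summary}\n\nNew chunk:\n{chunk}"
        )
    return summary
</code></pre>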
<hr />
<h2 id="heading-how-you-can-join-in">How You Can Join In</h2>
<ol>
<li><p><strong>Try the Demo</strong>: Head to <a target="_blank" href="http://wehelpyou.study"><strong>wehelpyou.study</strong></a>, start a session, and let us know what you love (or what needs work!).</p>
</li>
<li><p><strong>Submit Feedback</strong>: Use the in-app “Feedback” button or drop a comment below this post.</p>
</li>
<li><p><strong>Vote on Features</strong>: Each build night, we’ll open a poll—cast your vote for the next major feature.</p>
</li>
<li><p><strong>Follow the Series</strong>: Bookmark this blog; every Tuesday evening we’ll post release notes, technical deep dives, and roadmap updates.</p>
</li>
</ol>
<hr />
<h2 id="heading-whats-next-on-the-pipeline">What’s Next on the Pipeline</h2>
<ul>
<li><p>Based on the feedback from <a target="_blank" href="http://CANT.study">CANT.study</a>, we’re kicking off a fresh experiment: <strong>building a generic note-taking platform in public</strong>, designed from the ground up with <a target="_blank" href="http://CANT.study">CANT.study</a>’s lessons baked in. If you’ve ever used Google Keep, Apple Notes, or any other note app, this will feel familiar—but supercharged by GenAI.</p>
<ul>
<li><p><strong>Open Releases</strong>: Every build version will drop publicly, complete with changelogs on improvements and upcoming work.</p>
</li>
<li><p><strong>Feature-By-Request</strong>: You tell us what matters most; we’ll scope, build, and prioritize based on your suggestions.</p>
</li>
<li><p><strong>Real-Time Feedback</strong>: Test each release as it lands, share your thoughts, and watch us adapt.</p>
</li>
</ul>
</li>
</ul>
<hr />
<blockquote>
<p><em>Stay tuned for our first full “Build Night” update next Tuesday.</em></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[From Project to Product: How cant.study → wehelpyou.study Transforms Student Note‑Taking and Meetings with Open‑Source LLMs]]></title><description><![CDATA[For three semesters I worked as an on‑campus student note‑taker, clocking in at 8 a.m. every Monday, Wednesday, and Friday. While the paycheck helped, the routine exposed a deeper accessibility gap: many courses never secure a note‑taker, leaving NTI...]]></description><link>https://blog.sanathswaroop.com/from-project-to-product-how-cantstudy-wehelpyoustudy-transforms-student-notetaking-and-meetings-with-opensource-llms</link><guid isPermaLink="true">https://blog.sanathswaroop.com/from-project-to-product-how-cantstudy-wehelpyoustudy-transforms-student-notetaking-and-meetings-with-opensource-llms</guid><category><![CDATA[projects]]></category><category><![CDATA[ Rochester]]></category><category><![CDATA[genai]]></category><category><![CDATA[llm]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[React]]></category><category><![CDATA[vite]]></category><category><![CDATA[MongoDB]]></category><category><![CDATA[distributed system]]></category><category><![CDATA[Microservices]]></category><category><![CDATA[REST API]]></category><category><![CDATA[LLaMa]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[canary]]></category><category><![CDATA[NVIDIA]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Wed, 07 May 2025 17:33:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746638932495/d851286c-25da-4cd0-ac76-06643d42d28c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For three semesters I worked as an <strong>on‑campus student note‑taker</strong>, clocking in at 8 a.m. every Monday, Wednesday, and Friday. While the paycheck helped, the routine exposed a deeper accessibility gap: many courses <em>never</em> secure a note‑taker, leaving NTID classmates to piece together scattered slides, hurried phone photos, or—too often—nothing at all.</p>
<blockquote>
<p><strong>Why can’t note‑taking be automated, reliable, and context‑aware?</strong></p>
</blockquote>
<p>That question became <strong>CANT — the Context‑Aware Note Taker</strong>. (<a target="_blank" href="http://cant.study/">http://cant.study/</a>)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746638963035/e921c673-bfe9-4966-a43f-f83d7c8e9775.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-from-manual-notes-to-multimodal-intelligence">From Manual Notes to Multimodal Intelligence</h3>
<p>CANT’s earliest prototype hinged on three technical insights:</p>
<ol>
<li><p><strong>Context Correction Layer</strong><br /> Raw audio is first transcribed by <strong>NVIDIA Canary</strong>. A <em>Context Correction Layer</em> then aligns jargon, acronyms, and speaker disfluencies with the instructor’s actual slides and course materials, transforming transcripts into task‑ready text.</p>
</li>
<li><p><strong>Iterative Chunk Summarization</strong><br /> Lectures are streamed in ~10 k‑token chunks. Each chunk is summarized, then adjacent summaries are recursively refined to build a cohesive document—reducing LLM hallucinations and keeping sections tight and well‑labeled.</p>
</li>
<li><p><strong>24‑Hour Retention Window</strong><br /> To balance study needs with privacy, <strong>only the generated structured notes are retained</strong> on the server after the first 24 hours. The system performs an auto‑purge unless a student explicitly opts to archive them (one way to implement the purge is sketched after this list).</p>
</li>
</ol>
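<p>One standard way to implement that 24-hour purge (a sketch of the idea, not our exact schema; field, collection, and connection names below are illustrative) is a MongoDB TTL index, which deletes documents once their timestamp is a day old:</p>
<pre><code class="lang-python"># Sketch: MongoDB's TTL monitor auto-deletes documents 24 hours after
# their createdAt timestamp. Names and the URI are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["cant"]
transcripts = db["raw_transcripts"]

transcripts.create_index("createdAt", expireAfterSeconds=24 * 60 * 60)

transcripts.insert_one({
    "sessionId": "demo-session",
    "transcript": "raw text here",
    "createdAt": datetime.now(timezone.utc),
})
</code></pre>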
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746724169417/7d8ac106-7025-40e7-a653-322227228e48.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-design-decisions-amp-why-they-won">Design Decisions &amp; Why They Won</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Challenge</td><td>Design Decision</td><td>Why It Won</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Transcription accuracy</strong></td><td><strong>Context Correction Layer over Canary output</strong></td><td>Injects slide headings, technical terms, and abbreviations for near‑human readability.</td></tr>
<tr>
<td><strong>Token limits in LLMs</strong></td><td><strong>Chunk‑then‑iterate pipeline</strong></td><td>Produces tighter summaries and scales to multi‑hour seminars.</td></tr>
<tr>
<td><strong>Observability &amp; debugging</strong></td><td><strong>Langfuse tracing + log streams (file &amp; Elasticsearch)</strong></td><td>One-click replay of any session’s LLM calls; Kibana dashboards spot drift or latency spikes.</td></tr>
<tr>
<td><strong>Student trust &amp; compliance</strong></td><td><strong>24‑hour note retention</strong></td><td>Guarantees automatic cleanup without manual admin intervention.</td></tr>
</tbody>
</table>
</div><hr />
<h3 id="heading-a-very-brief-tour-of-the-deployment-pipeline">A (Very) Brief Tour of the Deployment Pipeline</h3>
<ol>
<li><p><strong>Frontend</strong> – React 18 + Tailwind. A single “Start Session” button streams 30 s audio chunks and visualizes progress.</p>
</li>
<li><p><strong>API Layer (Node.js + Express)</strong> – Handles JWT auth, writes Langfuse traces, and runs the remaining business logic.</p>
</li>
<li><p><strong>Sandbox Container</strong></p>
<ul>
<li><p><strong>Speech</strong>: NVIDIA Canary</p>
</li>
<li><p><strong>Context Parser</strong>: OCR for slides/PDFs</p>
</li>
<li><p><strong>LLM</strong>: Llama 3 8B‑Instruct‑128k behind an Nginx reverse proxy</p>
</li>
</ul>
</li>
<li><p><strong>Storage</strong> – <strong>MongoDB</strong> for transcripts &amp; structured notes (ephemeral collections respect the 24‑hour policy).</p>
</li>
<li><p><strong>CI/CD</strong> – GitHub Actions → Docker Buildx → zero‑downtime blue‑green deploys on <strong>Linode’s dedicated GPU cloud</strong>.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746639079748/b8478e83-c957-491c-ad55-5a993af35cc7.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-launch-day-30-april-2025-capstone-poster-amp-public-release">Launch Day: 30 April 2025 — Capstone Poster &amp; Public Release</h3>
<p>We unveiled CANT during the RIT Data Science Capstone poster session on <strong>30 April 2025</strong> and simultaneously opened public access:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746639103959/eaa4e4db-705c-4374-bc29-a20cc401f1dd.png" alt class="image--center mx-auto" /></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Domain</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><a target="_blank" href="http://cant.study"><strong>cant.study</strong></a></td><td>The problem space—where students <em>can’t</em> study without equitable notes.</td></tr>
<tr>
<td><a target="_blank" href="http://wehelpyou.study"><strong>wehelpyou.study</strong></a></td><td>The solution space—CANT’s live app and onboarding hub.</td></tr>
</tbody>
</table>
</div><div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">Try our app for free @ https://app.cant.study</div>
</div>

<p><strong>First‑week impact</strong></p>
<ul>
<li><p><strong>42 new registered users</strong></p>
</li>
<li><p><strong>100+ unique visitors</strong></p>
</li>
<li><p><strong>8+ hours of audio captured</strong></p>
</li>
<li><p><strong>14 structured note sets generated</strong></p>
</li>
</ul>
<p><strong>Improvements and Results</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746724215128/1e4ddcab-e7ad-41a0-89b0-51a88f066825.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-whats-next">What’s Next?</h3>
<ol>
<li><p><strong>Federated Learning</strong><br /> Train the Context Correction Layer on‑device so each cohort’s jargon improves future transcripts—without centralizing raw data.</p>
</li>
<li><p><strong>Edge Deployment using Service Workers</strong><br /> Packaging the full pipeline for offline field trips and patchy Wi‑Fi environments.</p>
</li>
<li><p><strong>Open‑Source Modules</strong><br /> Releasing the chunk‑iterative summarizer and the Context‑Aware ASR Correction layer so researchers can plug in custom LLMs.</p>
</li>
</ol>
<p>Thank you for following our journey from sleepy lecture halls to an AI‑powered accessibility platform. Have feedback or want to contribute? Visit <a target="_blank" href="http://app.cant.study"><strong>cant.study</strong></a>—because <em>when you</em> <a target="_blank" href="http://CANT.study"><em>CANT.study</em></a><em>,</em> <a target="_blank" href="https://wehelpyou.study/"><em>WEHELPYOU.study</em>.</a></p>
<p><strong>Email: gmail@sanathswaroop.com</strong></p>
<p><strong>Contact: +1 (585)-565-2567</strong></p>
]]></content:encoded></item><item><title><![CDATA[Building the Future of UAP Truth: My Experience at the SF UAP Hackathon]]></title><description><![CDATA[In mid‑April 2025, I joined the inaugural SF UAP Hackathon aboard the USS Hornet in Alameda, California—a convergence of engineers, data scientists, and enthusiasts all driven to demystify Unidentified Aerial Phenomena. Over the weekend, our team imm...]]></description><link>https://blog.sanathswaroop.com/sfuaphackathon</link><guid isPermaLink="true">https://blog.sanathswaroop.com/sfuaphackathon</guid><category><![CDATA[sfo]]></category><category><![CDATA[Pelco]]></category><category><![CDATA[ufo]]></category><category><![CDATA[Alien Civilization]]></category><category><![CDATA[alien]]></category><category><![CDATA[san francisco]]></category><category><![CDATA[hackathon]]></category><category><![CDATA[Raspberry Pi]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Mon, 21 Apr 2025 22:15:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745272617631/af7a6f49-ac6e-4894-9eb4-0e0caad76824.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In mid‑April 2025, I joined the inaugural SF UAP Hackathon aboard the USS Hornet in Alameda, California—a convergence of engineers, data scientists, and enthusiasts all driven to demystify Unidentified Aerial Phenomena. Over the weekend, our team immersed ourselves in cutting‑edge hardware, decentralized protocols, and real‑time analytics to forge new pathways toward transparency and truth in UAP research.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745272662635/fe634ceb-c361-40dd-b913-71641e91e24c.webp" alt class="image--center mx-auto" /></p>
<h2 id="heading-choosing-the-humanity-softlanding-track">Choosing the “Humanity Softlanding” Track</h2>
<p>From the opening keynote, it was clear that no single discipline holds all the answers. We chose the <strong>Humanity Softlanding</strong> track, which called for a <strong>permissionless, bias‑resistant network</strong> to collect, verify, and reward UAP observations on‑chain. Our guiding principles:</p>
<ul>
<li><p><strong>Open Contribution:</strong> Anyone—satellite operators, backyard observers, or academic teams—should be able to feed data into the system without gatekeepers.</p>
</li>
<li><p><strong>Decentralized Verification:</strong> Misinformation is rampant; we built consensus‑based checks to elevate credible reports.</p>
</li>
<li><p><strong>On‑Chain Incentives:</strong> Contributors earn tokens tied to the quality and uniqueness of their data.</p>
</li>
<li><p><strong>Immutable Repository:</strong> A global ledger holds all verified UAP intelligence, accessible for future study.</p>
</li>
</ul>
<p>By framing these goals as our solution rather than a problem statement, we steered clear of abstract challenges and focused on building a tangible proof‑of‑concept.</p>
<h2 id="heading-what-we-built">What We Built</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745273280934/c2729ba5-e45a-4c92-94f7-a8f7b7b5958b.jpeg" alt class="image--center mx-auto" /></p>
<p>Armed with a <strong>CanaKit Raspberry Pi 5 Starter Kit</strong> and a <strong>2‑Axis Pan‑Tilt Camera Module</strong>, we deployed a network of low‑cost, programmable sensors capable of continuous sky scans. The real highlight was integrating a <strong>Pelco Thermal Imaging PTZ Camera</strong> (valued at $20,000) to simulate high‑precision observations:</p>
<ol>
<li><p><strong>Detection &amp; Tracking Pipeline:</strong> Custom Python scripts on the Pi streamed thermal video to a local server, triggering snapshots whenever unusual signatures appeared.</p>
</li>
<li><p><strong>Data Onboarding:</strong> Each snapshot was hashed and timestamped, then published to our smart contract on a testnet (see the sketch after this list).</p>
</li>
<li><p><strong>Decentralized Storage:</strong> We stored every image on IPFS for censorship‑resistant archival.</p>
</li>
<li><p><strong>Reputation Engine:</strong> Contributors could upvote or challenge each sighting. Only snapshots reaching a trust‑threshold minted reward tokens.</p>
</li>
<li><p><strong>Dashboard:</strong> A React front‑end visualized live feeds, recent detections, and contributor leaderboards.</p>
</li>
</ol>
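<p>The data-onboarding step from the list above boils down to hashing each snapshot and wrapping it in a timestamped record; a minimal sketch follows. The filename is hypothetical, and the testnet smart-contract call itself is omitted:</p>
<pre><code class="lang-python"># Sketch: hash and timestamp a snapshot. The resulting JSON (or its hash)
# is what gets anchored on-chain; the contract call is omitted here.
import hashlib
import json
import time

def onboard_snapshot(path):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "timestamp": int(time.time()), "file": path}

record = onboard_snapshot("thermal_0042.png")  # hypothetical filename
print(json.dumps(record))
</code></pre>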
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745273966550/69a962b2-33e2-4aed-a0c5-684fec114c28.png" alt class="image--center mx-auto" /></p>
<p>By hackathon’s end, we had a fully operational demo: a decentralized UAP detection network that recognized, verified, and rewarded sightings in real time.</p>
<h2 id="heading-mentorship-and-essential-tools">Mentorship and Essential Tools</h2>
<p>We owe a huge debt to <strong>David Hooper</strong>, founder of a stealth UAP defense startup. His hands‑on guidance helped us refine our incentive model and vet our verification logic. David’s support turned our baseline design into a scalable, field‑ready system.</p>
<h2 id="heading-additional-learnings-discovering-the-tridactyls">Additional Learnings: Discovering the Tridactyls</h2>
<p><img src="https://tastio-testing.ams3.digitaloceanspaces.com/Tridactyls/assets/img/Maria/profile.jpeg" alt="https://tridactyls.org/" /></p>
<p>While most of our time was spent coding and circuit‑building, I dove into the <strong>Biologics Track</strong>, where I stumbled upon <a target="_blank" href="https://tridactyls.org/specimens">Tridactyls.org/sfuap</a>. The site catalogs “Tridactyls”—enigmatic, three‑toed creatures rumored to inhabit remote ecosystems. Learning about these potential lifeforms reminded me that unexplained phenomena aren’t limited to the skies; the natural world still holds secrets as profound as any UAP. That curiosity filtered back into our project: if biology can surprise us at ground level, why not in the troposphere?</p>
<h2 id="heading-the-ce5-exercise-and-my-uap-sighting">The CE5 Exercise and My UAP Sighting</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745272793813/21432b36-3d03-47ed-9bc2-1f4a7edaf6f3.webp" alt class="image--center mx-auto" /></p>
<p>On the evening of <strong>April 19th at 9 PM</strong>, our team paused development to experiment with <strong>CE5 (Close Encounters of the Fifth Kind)</strong>—a meditation‑based protocol for human‑initiated contact. Skeptical but curious, I followed the guided visualization. We observed an object in the sky—what we later confirmed as a UAP. The object moved in ways that defied conventional explanations, and although we captured clear optical images, we were unable to record its thermal signature. We uploaded those optical snapshots to IPFS and validated them on‑chain. Witnessing a live UAP during a CE5 session was both surreal and validating—proof that our network could bridge human intention, observational technology, and decentralized verification.</p>
<h2 id="heading-a-surreal-setting-the-uss-hornet-experience">A Surreal Setting: The USS Hornet Experience</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1745272705907/b40d7bd4-5512-4ece-aaeb-254e3f7fc60f.jpeg" alt class="image--center mx-auto" /></p>
<p>Hosting a hackathon on the USS Hornet was unforgettable. This historic aircraft carrier not only served in World War II and Vietnam but famously recovered Apollo 11’s command module on July 24, 1969. Walking the same deck where Neil Armstrong stepped down from the capsule and made his way to the quarantine chamber was profoundly moving.</p>
<p>The sense of history and exploration in the air fueled our team’s ambition—reminding us that pushing into the unknown has always defined human progress.</p>
<h2 id="heading-looking-ahead">Looking Ahead</h2>
<p>The SF UAP Hackathon was more than a sprint; it was a proof point that collaboration, open protocols, and a bit of healthy skepticism can push the envelope on UAP truth. As I integrate lessons learned into my ongoing research, I’m excited to explore:</p>
<ul>
<li><p><strong>Edge AI:</strong> Deploying on‑device inference to flag anomalies without constant server connectivity.</p>
</li>
<li><p><strong>Cross‑Discipline Data Fusion:</strong> Merging biosensor, acoustic, and radio‑frequency inputs alongside thermal imaging.</p>
</li>
<li><p><strong>Broader Community Engagement:</strong> Incentivizing amateur astronomers and citizen scientists through gamified discovery.</p>
</li>
</ul>
<p>Standing on the Hornet’s deck, I felt we were at the dawn of a new era—one where transparency, technology, and collective curiosity illuminate the unknown.</p>
]]></content:encoded></item><item><title><![CDATA[Beyond Manual Thought: A Journey into Tool-Assisted AI Reasoning]]></title><description><![CDATA[Not long ago, I shared a piece on how technical solutions evolve—think of it as the journey beyond REST in microservices. In that same spirit, the way AI handles computation has also undergone a transformation. Sometimes these models compute huge num...]]></description><link>https://blog.sanathswaroop.com/beyond-manual-thought-a-journey-into-tool-assisted-ai-reasoning</link><guid isPermaLink="true">https://blog.sanathswaroop.com/beyond-manual-thought-a-journey-into-tool-assisted-ai-reasoning</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[aitools]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[O1]]></category><category><![CDATA[DeepSeekR1]]></category><category><![CDATA[#Grok3 ]]></category><category><![CDATA[claude.ai]]></category><category><![CDATA[Thinking]]></category><category><![CDATA[large language models]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Mon, 10 Mar 2025 23:18:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741648593796/c3883878-8228-4ce1-a5d3-771e38e299ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Not long ago, I shared a piece on how technical solutions evolve—think of it as the journey beyond REST in microservices. In that same spirit, the way AI handles computation has also undergone a transformation. Sometimes these models compute huge numbers or factorize massive integers at breakneck speed, other times they walk us through each methodical step. Below is a look at two real-world scenarios that highlight how and why these differences matter.</p>
<hr />
<h2 id="heading-1-why-ai-computation-matters">1. Why AI Computation Matters</h2>
<p>When we talk about AI, most people envision chatbots or recommendation engines. But under the hood, these systems often need to perform intense mathematical operations. Whether it’s multiplying large numbers, factoring gargantuan integers, or evaluating complex expressions, <strong>how</strong> an AI performs these tasks reveals much about its design philosophy:</p>
<ul>
<li><p><strong>Fast, Tool-Assisted Computation</strong>: Calls upon specialized libraries or code runners for near-instant answers.</p>
</li>
<li><p><strong>Manual, Step-by-Step Explanation</strong>: Emulates human reasoning, typically slower but offers transparency and deeper insight into each step.</p>
</li>
</ul>
<h2 id="heading-2-the-multiplication-test">2. The Multiplication Test</h2>
<p>Our first example involved a seemingly straightforward arithmetic query:</p>
<pre><code class="lang-plaintext">Calculate 25665566*6778765556
</code></pre>
<h3 id="heading-chatgpts-instant-approach">ChatGPT’s Instant Approach</h3>
<pre><code class="lang-python">result = <span class="hljs-number">25665566</span> * <span class="hljs-number">6778765556</span>
result

<span class="hljs-number">173980854776044696</span>
</code></pre>
<ul>
<li><p><strong>The Strategy</strong>: Called a built-in “code runner” with a single Python snippet to multiply these large integers.</p>
</li>
<li><p><strong>The Result</strong>: A near-immediate output.</p>
</li>
<li><p><strong>The “Why?”</strong>: It recognized the task was purely computational and delegated it to a reliable tool.</p>
</li>
</ul>
<h3 id="heading-deepseeks-amp-grok-methodical-route">Deepseek’s &amp; Grok Methodical Route</h3>
<details><summary>DeepSeek R1 &lt;/think&gt;</summary><div data-type="detailsContent">Alright, so I need to calculate the product of 25,665,566 and 6,778,765,556. Hmm, that's a pretty big multiplication. Let me think about how to approach this. I know that multiplying large numbers can be tricky, especially if I don't have a calculator handy. Maybe I can break it down using the distributive property or some estimation first to check my work later. First, let me write down the numbers to make sure I have them right: 25,665,566 and 6,778,765,556. Both are in the billions range, so their product is going to be really large. I might need to use some systematic method here. Let me recall that when multiplying two numbers, you can break them into parts. For example, using the standard algorithm where you multiply each digit and carry over. But doing that manually for such large numbers would be time-consuming and prone to errors. Maybe there's a smarter way. Alternatively, I can use the fact that 6,778,765,556 is close to 6.778765556 billion, and 25,665,566 is approximately 25.665566 million. But converting them into scientific notation might make it easier. Let's try that. 25,665,566 can be written as 2.5665566 × 10^7, and 6,778,765,556 is 6.778765556 × 10^9………………..</div></details><details><summary>Grok Thinking</summary><div data-type="detailsContent">So I have this multiplication problem: 25,665,566 times 6,778,765,556. Hmm, those are pretty big numbers. I'm a bit rusty on multiplying large numbers, but I think I can handle it. Let me see. First, I remember that when multiplying two numbers, I can break it down into smaller parts. Maybe I can use the distributive property or something like that. Wait, actually, the standard way is to multiply each digit of one number by each digit of the other number and then add them up, right? But that seems tedious with such large numbers. Alternatively, perhaps I can simplify the numbers somehow. Let me check if there are any patterns or factors that can make this easier. Let's see, 25,665,566 and 6,778,765,556. Hmm, not sure if they share any common factors that I can factor out. Wait a minute, maybe I can approximate the numbers to get a rough estimate first. That might help me check if my final answer makes sense. So, 25,665,566 is roughly 25 million, and 6,778,765,556 is roughly 6.8 billion. Multiplying those, 25 million times 6.8 billion. Well, 25 times 6.8 is 170, and then million times billion is trillion, so roughly 170 trillion. But actually, wait, million times billion is 10^6 times 10^9, which is 10^15, so quadrillion. Wait, no: million is 10^6, billion is 10^9, so 10^6 * 10^9 = 10^15, which is quadrillion. But in the context of the numbers, 25 million is 2.5*10^7, and 6.8 billion is 6.8*10^9. So 2.5*10^7 * 6.8*10^9 = (2.5*6.8)*10^(7+9) = 17*10^16, which is 1.7*10^17. Wait, that doesn't seem right. Wait, 10^16 is ten quadrillion, but let's confirm: 10^15 is quadrillion, so 10^16 is ten quadrillion, but actually, in standard naming, 10^15 is quadrillion, 10^18 is quintillion, so 10^16 is 10 quadrillion. But perhaps it's better to think in terms of powers of ten…………</div></details>

<ul>
<li><p><strong>The Strategy</strong>: Simulated each step of long multiplication—much as you would by hand, carrying digits and summing partial products (a sketch follows this list).</p>
</li>
<li><p><strong>The Result</strong>: Accurate, but took a notable amount of time.</p>
</li>
<li><p><strong>The “Why?”</strong>: Focused on clarity and a breakdown of logic at each phase, sacrificing speed for a teaching-like demonstration.</p>
</li>
</ul>
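<p>To make the contrast concrete, here is a minimal sketch—our own code, not either model’s output—of the digit-by-digit procedure they emulate:</p>
<pre><code class="lang-python"># One shifted partial product per digit of the multiplier, then a running
# sum—the same by-hand long multiplication the models walk through.
def long_multiply(a, b):
    total = 0
    for position, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** position
    return total

assert long_multiply(25665566, 6778765556) == 25665566 * 6778765556
</code></pre>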
<h2 id="heading-3-the-factorization-challenge">3. The Factorization Challenge</h2>
<p>Soon after, we posed a tougher problem: factorizing a massive integer:</p>
<p>Prompt: Factorize 987654321987654321</p>
<p>Output:</p>
<p>$$987654321987654321 = 3^2 \times 7 \times 11 \times 13 \times 17^2 \times 19 \times 52579 \times 379721$$</p><h3 id="heading-chatgpts-swift-response">ChatGPT’s Swift Response</h3>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> sympy

num = <span class="hljs-number">987654321987654321</span>
factors = sympy.factorint(num)
factors

{<span class="hljs-number">3</span>: <span class="hljs-number">2</span>, <span class="hljs-number">7</span>: <span class="hljs-number">1</span>, <span class="hljs-number">11</span>: <span class="hljs-number">1</span>, <span class="hljs-number">13</span>: <span class="hljs-number">1</span>, <span class="hljs-number">17</span>: <span class="hljs-number">2</span>, <span class="hljs-number">19</span>: <span class="hljs-number">1</span>, <span class="hljs-number">52579</span>: <span class="hljs-number">1</span>, <span class="hljs-number">379721</span>: <span class="hljs-number">1</span>}
</code></pre>
<ul>
<li><p><strong>Tool Used</strong>: A code snippet employing the <code>sympy</code> library in Python.</p>
</li>
<li><p><strong>Time Taken</strong>: Just a couple of seconds—an immediate resolution of prime factors.</p>
</li>
<li><p><strong>Key Insight</strong>: Similar to how advanced communication protocols drastically improve throughput, specialized libraries handle complex math in a fraction of the time.</p>
</li>
</ul>
<h3 id="heading-deepseeks-longer-journey">Deepseek’s Longer Journey</h3>
<details><summary>DeepSeek R1 &lt;/think&gt;</summary><div data-type="detailsContent">Okay, so I need to factorize 987654321987654321. Hmm, that's a pretty big number. Let me start by writing it down clearly: 987,654,321,987,654,321. Wait, I notice that this number seems to have a repeating pattern. It's 987654321 followed by another 987654321. So maybe it's constructed by repeating the sequence 987654321 twice. That might help in factoring it somehow. First, let me recall that numbers with repeating patterns can sometimes be expressed in a form that allows factoring. For example, a number like 123123 can be written as 123 × 1001. Similarly, maybe this number can be written as 987654321 multiplied by some factor. Let me check: If the original number is 987654321987654321, that's 987654321 followed by another 987654321. So, how many digits is that? The first part, 987654321, has 9 digits. So repeating it twice would make 18 digits. Let me confirm: 987,654,321,987,654,321 is indeed 18 digits. So, similar to the 123123 example, this can be written as 987654321 × 10^9 + 987654321. That simplifies to 987654321 × (10^9 + 1). So, the number N = 987654321987654321 can be written as N = 987654321 × (10^9 + 1). That seems like a good start. Now, if I can factor both 987654321 and (10^9 + 1), then I can combine their factors to get the full prime factorization of N. First, let's factor 987654321. Let me recall if I know anything about this number. Wait, 987654321 is a well-known number, it's the sequence of digits from 9 down to 1. I think this number has some factors. Let me try dividing it by small primes. Starting with 2: 987654321 is odd, so not divisible by 2. Next, 3: sum of digits. Let's calculate the sum: 9+8+7+6+5+4+3+2+1 = 45. 45 is divisible by 9 and 3, so the number is divisible by 9 and 3. Let's divide by 3 first.</div></details>

<ul>
<li><p><strong>Method</strong>: Spotted the repeating digit pattern, rewrote the number as 987654321 × (10^9 + 1), then factored each piece by trial division and stepwise primality checks (a sketch follows this list).</p>
</li>
<li><p><strong>Time Taken</strong>: 298 seconds—nearly five minutes.</p>
</li>
<li><p><strong>Why It Matters</strong>: Though it arrived at the same factorization, the approach highlights a commitment to manual or semi-manual problem-solving.</p>
</li>
</ul>
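<p>For reference, here is a minimal sketch—ours, not DeepSeek’s actual procedure—of the trial-division route its reasoning describes:</p>
<pre><code class="lang-python"># Plain trial division: try every candidate divisor up to the square root.
def trial_factor(n):
    factors = {}
    d = 2
    while d * d &lt;= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n &gt; 1:                             # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

n = 987654321987654321
assert n == 987654321 * (10**9 + 1)       # the repeating-pattern identity
print(trial_factor(n))
# {3: 2, 7: 1, 11: 1, 13: 1, 17: 2, 19: 1, 52579: 1, 379721: 1}
</code></pre>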
<p>Final Output:</p>
<p>$$\boxed{3^2 \times 7 \times 11 \times 13 \times 17^2 \times 19 \times 52579 \times 379721}$$</p><h2 id="heading-4-fast-vs-thorough-a-tale-of-two-approaches">4. Fast vs. Thorough: A Tale of Two Approaches</h2>
<p>Whether it’s the multiplication or factorization scenario, the contrast stands out:</p>
<p><strong>Tool-Based Calculation</strong></p>
<ul>
<li><p><strong>Pros</strong>: Lightning speed, reliable results, minimal chance of slip-ups.</p>
</li>
<li><p><strong>Cons</strong>: Limited transparency; the AI just “knows” the final answer without showing each step.</p>
</li>
</ul>
<p><strong>Manual “Thinking”</strong></p>
<ul>
<li><p><strong>Pros</strong>: Explains every detail, ideal for teaching and verifying each arithmetic operation.</p>
</li>
<li><p><strong>Cons</strong>: Time-consuming, higher risk of compounding errors if a single step is off.</p>
</li>
</ul>
<h2 id="heading-5-the-final-word-matching-tools-to-tasks">5. The Final Word: Matching Tools to Tasks</h2>
<p>Much like choosing <a target="_blank" href="https://blog.sanathswaroop.com/to-rest-or-to-not"><strong>RabbitMQ</strong> or <strong>gRPC</strong> for the right use case in microservices</a>, deciding between tool-based or stepwise AI reasoning depends on your goals:</p>
<ul>
<li><p>If you value <strong>speed and reliability</strong>—reach for a tool-based solution.</p>
</li>
<li><p>If you need <strong>explanation and clarity</strong>—embrace the methodical, step-by-step breakdown.</p>
</li>
</ul>
<p>Both approaches have their place, and both can produce the correct result. The key is understanding which strategy suits your scenario. When seconds matter, harness specialized libraries. When teaching or diagnosis is the main objective, a thorough, human-like walkthrough might be best.</p>
]]></content:encoded></item><item><title><![CDATA[Beyond REST: A Journey into the World of gRPC and Asynchronous Messaging]]></title><description><![CDATA[There was a time when REST was the de facto standard of our microservices landscape—a familiar language that all our services spoke fluently. In the early days, REST’s straightforward HTTP verbs and JSON payloads brought us together in a harmonious co...]]></description><link>https://blog.sanathswaroop.com/to-rest-or-to-not</link><guid isPermaLink="true">https://blog.sanathswaroop.com/to-rest-or-to-not</guid><category><![CDATA[REST API]]></category><category><![CDATA[gRPC]]></category><category><![CDATA[Asynchronous Communication ]]></category><category><![CDATA[rabbitmq]]></category><dc:creator><![CDATA[Sanath Swaroop Mulky]]></dc:creator><pubDate>Wed, 05 Feb 2025 04:31:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738732744932/fe2d56db-0d84-4ccd-b1b9-c8a401cdb302.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There was a time when REST was the de facto standard of our microservices landscape—a familiar language that all our services spoke fluently. In the early days, REST’s straightforward HTTP verbs and JSON payloads brought us together in a harmonious conversation. But as our systems grew more complex and our throughput demands increased, cracks began to show in this once-trusted method of communication.</p>
<h2 id="heading-the-restful-dilemma">The RESTful Dilemma</h2>
<p>Imagine an office where every department must wait on a single phone line to get the information they need. That’s what REST felt like when our microservices had to engage in synchronous, tightly coupled exchanges. Each service call was like placing a call on that single line—if one department (or service) slowed down, the whole operation lagged behind.</p>
<ul>
<li><p><strong>High Overhead and Latency:</strong><br />  While human-readable, JSON is inherently verbose. The constant back-and-forth of large payloads led to unnecessary network chatter and increased latency.</p>
</li>
<li><p><strong>Tight Coupling and Cascading Failures:</strong><br />  One unresponsive service could trigger a domino effect, impacting the reliability of the entire system.</p>
</li>
<li><p><strong>Versioning Challenges:</strong><br />  Maintaining backward compatibility became an ongoing struggle as individual services evolved, often turning routine updates into a logistical nightmare.</p>
</li>
</ul>
<p>These issues were not just theoretical. They impacted real-world systems, making REST a less-than-ideal choice for modern, high-demand applications.</p>
<h2 id="heading-the-power-of-asynchronous-messaging-a-lesson-from-npci">The Power of Asynchronous Messaging: A Lesson from NPCI</h2>
<p>My experience at the National Payments Corporation of India (NPCI) taught me firsthand why REST was not a viable option for certain scenarios. We were developing a high-transaction payments application that had to handle an incredibly high TPS (transactions per second). The synchronous nature of REST was a bottleneck—a recipe for delays and failures under heavy load.</p>
<p>To address this, we chose <strong>RabbitMQ</strong> for inter-service communication. By embracing asynchronous messaging, our services could decouple their interactions. One service would drop a message into a queue and move on, while another would pick it up and process it at its own pace.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738728840270/dcfa6b2d-f6c1-4650-82e3-a63bac64c842.png" alt class="image--center mx-auto" /></p>
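<p>In outline, the pattern looks something like this—a minimal sketch using the <code>pika</code> client, with an illustrative queue name and payload rather than our actual NPCI setup:</p>
<pre><code class="lang-python">import pika

# Producer: declare a durable queue, drop the message, and move on.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="payments", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="payments",
    body=b'{"txn_id": "T123", "amount": 4999}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)

# Consumer: pick messages up at its own pace, acknowledging only on success.
def handle(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="payments", on_message_callback=handle)
channel.start_consuming()
</code></pre>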
<ul>
<li><p><strong>Handled High TPS Gracefully:</strong><br />  The message queuing system absorbed the burst of transactions, ensuring that no single service became overwhelmed.</p>
</li>
<li><p><strong>Enhanced Resilience:</strong><br />  Even if one component experienced downtime, the messages would patiently wait in the queue until processing could resume, eliminating the risk of cascading failures.</p>
</li>
<li><p><strong>Improved Scalability:</strong><br />  Decoupled services meant that each could scale independently, without the pressure of synchronous dependencies.</p>
</li>
</ul>
<p>In our payments application, RabbitMQ not only provided the necessary throughput but also delivered the robustness required for a mission-critical financial system—a clear testament to why REST was not the right tool for this job.</p>
<h2 id="heading-a-new-chapter-begins-embracing-grpc-for-microservices">A New Chapter Begins: Embracing gRPC for Microservices</h2>
<p>While asynchronous messaging solved many of our challenges at NPCI, there were other scenarios where we needed rapid, low-latency communication between services. While working at StaTwig, I embarked on an exciting proof-of-concept that leveraged <strong>gRPC</strong> for inter-service communication among our user, shipment, orders, and inventory services.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738724639254/bcdd2fc8-8f6c-4cbb-87c3-738cb7d036bc.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-why-grpc-transformed-our-communication">Why gRPC Transformed Our Communication</h3>
<ul>
<li><p><strong>Low Latency and High Performance:</strong><br />  gRPC’s use of HTTP/2 and binary serialization (via Protocol Buffers) meant that messages were not only smaller but also faster to transmit. This dramatically reduced the communication delays that were all too common with REST.</p>
</li>
<li><p><strong>Streaming and Real-Time Data:</strong><br />  With gRPC, we could implement bi-directional streaming. This was a game-changer for services that required continuous updates—imagine a real-time dashboard that never missed a beat.</p>
</li>
<li><p><strong>Clear and Rigid Contracts:</strong><br />  The strict API contracts enforced by gRPC ensured that every service knew exactly what to expect. This clarity reduced errors and made maintenance significantly easier (see the sketch after this list).</p>
</li>
</ul>
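<p>As a flavor of what that contract looks like in practice, here is a minimal client-side sketch—assuming stubs generated with <code>grpcio-tools</code> from a hypothetical <code>shipment.proto</code>, not our actual StaTwig services:</p>
<pre><code class="lang-python">import grpc

# Hypothetical modules generated by grpcio-tools from shipment.proto,
# which would define a ShipmentService with a GetShipment RPC.
import shipment_pb2
import shipment_pb2_grpc

# One HTTP/2 channel; Protocol Buffers travel as compact binary frames.
channel = grpc.insecure_channel("shipment-service:50051")
stub = shipment_pb2_grpc.ShipmentServiceStub(channel)

# Every field and type is fixed by the .proto contract, so a mismatched
# call fails at code-generation time rather than in production.
reply = stub.GetShipment(shipment_pb2.ShipmentRequest(shipment_id="S-42"))
print(reply.status)
</code></pre>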
<p>My PoC at StaTwig revealed measurable improvements: reduced latency, more efficient data handling, and a smoother overall user experience. It demonstrated that for scenarios demanding immediate, interactive communication, gRPC was far superior to the traditional REST approach.</p>
<h2 id="heading-the-final-word-choosing-the-right-tool-for-the-job">The Final Word: Choosing the Right Tool for the Job</h2>
<p>Both RabbitMQ and gRPC represent significant advances over REST when it comes to building modern microservice architectures. They aren’t direct competitors but rather complementary tools, each addressing different communication challenges:</p>
<ul>
<li><p><strong>RabbitMQ</strong> is ideal for high-throughput, asynchronous environments where resilience and decoupling are paramount. My experience at NPCI proved that when every millisecond counts and scalability is a must, asynchronous messaging is the way forward.</p>
</li>
<li><p><strong>gRPC</strong> excels in scenarios that demand low latency and real-time interactions. At StaTwig, our PoC highlighted how gRPC could streamline communications between critical services, reducing delays and bolstering performance.</p>
</li>
</ul>
<p><em>In our journey as architects and developers, the real art lies in choosing the right tool for the task at hand. Embracing RabbitMQ and gRPC has allowed us to build systems that not only meet today’s demands but also scale gracefully into the future.</em></p>
]]></content:encoded></item></channel></rss>