<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Manish Sinha]]></title><description><![CDATA[Thoughts, ideas, and critical explanation of Software Engineering]]></description><link>https://blog.manishsinha.me</link><image><url>https://substackcdn.com/image/fetch/$s_!2TEG!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F33cc6b8d-8566-4830-9558-3051b67af919_1280x1280.png</url><title>Manish Sinha</title><link>https://blog.manishsinha.me</link></image><generator>Substack</generator><lastBuildDate>Fri, 03 Apr 2026 20:13:07 GMT</lastBuildDate><atom:link href="https://blog.manishsinha.me/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Manish Sinha]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[manishsinha27@gmail.com]]></webMaster><itunes:owner><itunes:email><![CDATA[manishsinha27@gmail.com]]></itunes:email><itunes:name><![CDATA[Manish Sinha]]></itunes:name></itunes:owner><itunes:author><![CDATA[Manish Sinha]]></itunes:author><googleplay:owner><![CDATA[manishsinha27@gmail.com]]></googleplay:owner><googleplay:email><![CDATA[manishsinha27@gmail.com]]></googleplay:email><googleplay:author><![CDATA[Manish Sinha]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Entry-Level Work Is Being Restructured Faster Than Anyone Admits]]></title><description><![CDATA[Not eliminated. Restructured. 
The difference matters and it is not comforting]]></description><link>https://blog.manishsinha.me/p/entry-level-work-is-being-restructured</link><guid isPermaLink="false">https://blog.manishsinha.me/p/entry-level-work-is-being-restructured</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Fri, 20 Mar 2026 16:13:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/692598e3-01d0-4bfa-95bd-b943b9937257_2048x2048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The public debate about AI and jobs splits into two camps, both of which are wrong. The catastrophists say mass unemployment is weeks away. The dismissers say AI creates more jobs than it destroys, and history proves it. Neither framing captures what is actually happening to someone who graduated in 2024 and is looking for their first professional job.</p><p>Entry-level white-collar work exists on a spectrum. At one end are the roles that are almost entirely task execution: data entry, basic content production, routine research compilation, document formatting, and first-pass screening. These are being absorbed into workflows that one person now manages with AI assistance. At the other end are roles that use task execution as training for something harder: learning to read a brief properly, understanding what good output looks like, and developing the judgment that makes someone a valuable senior contributor. These are not the same thing, and conflating them misses the real problem.</p><p>What AI is doing is hollowing out the first category faster than employers are redesigning the second. Companies are not replacing 10 junior analysts with 10 AI analysts and one senior. They are replacing 10 junior analysts with AI and not thinking carefully about where the next generation of senior analysts comes from. 
The developmental pipeline depends on the early repetitive work that AI is now eating.</p><p>The early career AI exposure data is precise about this. Employment in the most AI-exposed entry-level occupations fell 13% in 2025. That is not theoretical displacement. It is people who did not get jobs that would have existed without the technology. The roles above that level are largely stable. The path to those roles is narrowing.</p><p>There is no clean resolution here. You cannot preserve inefficient work just to train the next generation of workers. But companies that automate entry-level functions without redesigning how people develop into senior roles will find themselves facing a talent gap in four to six years that is expensive and slow to fix.</p><p>The most practical thing individuals can do is understand which part of their current role involves execution and which involves judgment. The execution is being compressed. The judgment is not. Investing in developing judgment while the market still rewards execution is not comfortable advice, but it is accurate.</p>]]></content:encoded></item><item><title><![CDATA[Agents Are Replacing Workflows, Not Just Tasks]]></title><description><![CDATA[The job displacement story everyone is telling is too small]]></description><link>https://blog.manishsinha.me/p/agents-are-replacing-workflows-not</link><guid isPermaLink="false">https://blog.manishsinha.me/p/agents-are-replacing-workflows-not</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Thu, 19 Mar 2026 21:04:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0b226b0d-c062-4c08-a4ea-f07c5d2b743f_6240x4160.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every conversation about AI and jobs focuses on individual tasks. AI can write emails. AI can summarize documents. AI can generate first drafts. These are real capabilities, and they are making individual workers more productive at specific activities. 
But the more consequential shift is happening at the workflow level, and it gets far less attention.</p><p>A workflow is a sequence of tasks, often across multiple people, that produces a business outcome. Customer support escalation: message comes in, gets triaged, routed, answered, logged, and closed. Internal reporting: data gets pulled, transformed, written up, reviewed, and distributed. Recruitment screening: applications arrive, get filtered, ranked, summarized, and forwarded. These workflows did not require one smart person. They required several people with defined roles coordinating across a handoff chain.</p><p>AI agents can now handle that entire chain. Not all of it, not always reliably, and not without oversight. But the workflow architecture changes. Instead of five people touching a process sequentially, you might have one person supervising an agent that runs the whole thing. Jack Dorsey at Block was pointed about this when he cited it as a factor in laying off nearly half his workforce: smaller, flatter teams enabled by AI are doing the same work.</p><p>The Gartner finding that only 20% of companies actually reduced headcount due to AI is accurate yet somewhat misleading. The bigger effect is that companies are not backfilling when people leave. The workflow still runs. Fewer people run it. That does not show up as a layoff. It shows up as a hiring freeze and a productivity number that looks good on a slide.</p><p>Middle-layer roles are feeling this first. Positions built primarily on coordination, relay, and report generation are shrinking because those are the functions agents execute cleanly. Senior judgment roles and roles requiring physical presence or emotional trust are holding. Entry-level roles, which were the training ground for senior roles, are where the disruption is most structurally significant.</p><p>If you are building an AI product and you are thinking about which tasks it automates, zoom out. 
The question worth asking is which workflows it replaces and what the headcount implications of that replacement are. That is a harder question and a more honest one.</p>]]></content:encoded></item><item><title><![CDATA[The Framework Was Patching the Model, Not Helping You]]></title><description><![CDATA[LangChain made sense in 2023. The model limitations it was compensating for no longer exist]]></description><link>https://blog.manishsinha.me/p/the-framework-was-patching-the-model</link><guid isPermaLink="false">https://blog.manishsinha.me/p/the-framework-was-patching-the-model</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 18 Mar 2026 18:01:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9924c6d0-4cef-4461-be25-28fa721a7c60_3712x5568.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>LangChain shipped at the right moment. Models in 2022 and 2023 were genuinely limited in ways that required scaffolding. Short context windows meant you needed careful document chunking and retrieval logic. Inconsistent tool use meant you needed chains to structure multi-step behavior. Memory had to be managed externally because models had none. LangChain handled all of that, and doing so was real work.</p><p>The models caught up. Claude Opus 4.6, GPT-5, and Gemini 3.1 Pro handle multi-step reasoning, tool selection, retrieval, and context management natively in ways the 2023 models did not. The scaffold was compensating for model limitations. Most of those limitations are gone.</p><p>What you get now, if you reach for LangChain by default, is an abstraction layer over a model that does not need abstracting. The debugging experience when something breaks is worse than with direct API calls, because you are now tracing through the framework&#8217;s logic to find the problem. The surface area is larger. The control is lower. 
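</p><p>As a sketch of what &#8216;direct API calls&#8217; can look like, here is a two-step &#8216;chain&#8217; written as plain sequential calls. The <code>call_model</code> function is a stand-in for whatever client your provider ships, and the prompts are illustrative, not a recommendation:</p>

```python
# A sketch, not any vendor's SDK: call_model stands in for a provider
# client. The point is that a two-step "chain" is just two function calls.

def summarize_then_answer(document: str, question: str, call_model) -> str:
    """Summarize a document, then answer a question from the summary."""
    summary = call_model(f"Summarize the following document:\n\n{document}")
    return call_model(
        "Using only this summary, answer the question.\n"
        f"Summary: {summary}\nQuestion: {question}"
    )

# A stub model makes the control flow visible without a network call:
def stub_model(prompt: str) -> str:
    return "SUMMARY" if prompt.startswith("Summarize") else "ANSWER"

print(summarize_then_answer("long text...", "What happened?", stub_model))
# -> ANSWER
```

<p>Swap the stub for a real client and the structure does not change, which is the point: there is no framework logic to trace through when something breaks.</p><p>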
Teams I know that have removed it in favor of direct API calls saw their debugging time drop sharply.</p><p>This is not a criticism of LangChain as a project. It shipped a stable 1.0 in October 2025. LangGraph handles genuinely complex stateful multi-agent workflows well. LangSmith is a good observability tool. If you are building something that needs those specific capabilities, the ecosystem earns its weight.</p><p>The issue is that teams use LangChain as a default starting point when they do not have a specific reason to. They add a dependency and spend months learning its abstractions, only to discover that the underlying model could have handled their use case with 200 lines of direct API calls. The abstraction was not solving their problem. It was deferring the work of understanding what they actually needed.</p><p>Before reaching for a framework, be specific about what it buys you. If the answer is &#8216;quicker to prototype,&#8217; that is a legitimate reason. If the answer is &#8216;I have always used it,&#8217; run the experiment of not using it. You will find out quickly whether the complexity is earned.</p>]]></content:encoded></item><item><title><![CDATA[The Model Itself Is Not Your Competitive Advantage]]></title><description><![CDATA[GPT-4-level performance now costs 1/100th of what it did two years ago. Think about what that means]]></description><link>https://blog.manishsinha.me/p/the-model-itself-is-not-your-competitive</link><guid isPermaLink="false">https://blog.manishsinha.me/p/the-model-itself-is-not-your-competitive</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Mon, 16 Mar 2026 20:05:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/467ef7c0-0d1a-4f08-890a-24c4c2115d6f_6000x4000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In 2023, having access to a frontier model felt like an advantage. 
OpenAI, Anthropic, and Google were the only places to get serious language model capability. You paid the rate, built on top of it, and the quality of the underlying model gave you an edge over competitors who had not figured that out yet.</p><p>That window has closed. GPT-4-level performance, which was the benchmark two years ago, now costs under a dollar per million tokens. DeepSeek R1 achieved reasoning performance competitive with OpenAI&#8217;s best model at a reported training cost of $5.6 million, compared to hundreds of millions for comparable US lab models. Open-source models from Alibaba, Mistral, and others have narrowed the gap to the frontier to the point where, for a growing set of tasks, they are competitive or better. The model itself is becoming a commodity.</p><p>This matters because a lot of companies built their AI strategy around which model they call. That is not a strategy. That is a vendor choice. When the vendor choices are increasingly interchangeable and the switching cost is near zero, the model you call says nothing about whether your product is defensible.</p><p>What actually differentiates outcomes now is further up and further down the stack. Proprietary data that you have accumulated and that a competitor cannot easily replicate is a real moat. Workflow depth, meaning how deeply your AI is embedded in a user&#8217;s actual work rather than sitting as a chat window beside it, is a real moat. The speed at which you can ship and iterate is a real moat, because a technical advantage measured in months is now measured in weeks. The model is infrastructure.</p><p>The companies that understood this early are designing model-agnostic architectures from day one. They are not betting the product on Claude staying ahead of GPT or vice versa. 
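</p><p>A minimal sketch of what that looks like in code, with hypothetical vendor adapters behind one protocol. None of these names come from a real SDK:</p>

```python
# A sketch of a model-agnostic seam. Product code depends on the protocol;
# each vendor client becomes an adapter behind it. Names are illustrative.
from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"A:{prompt}"          # a real adapter would call vendor A here

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"B:{prompt}"          # a real adapter would call vendor B here

def product_feature(provider: ModelProvider, user_input: str) -> str:
    # Written against the protocol, so swapping vendors is a one-line change.
    return provider.complete(f"Rewrite politely: {user_input}")

print(product_feature(VendorA(), "fix this"))   # -> A:Rewrite politely: fix this
print(product_feature(VendorB(), "fix this"))   # -> B:Rewrite politely: fix this
```

<p>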
They are building the data layer, the integration layer, and the user experience in ways that compound over time regardless of which model is under the hood.</p><p>If your current AI strategy could be fully described as &#8216;we use GPT-5,&#8217; you have a vendor choice, not a strategy. The question worth spending time on is what your product would have that a competitor could not replicate in six months by calling the same API.</p>]]></content:encoded></item><item><title><![CDATA[Your Retrieval Pipeline Is Solving Yesterday's Problem]]></title><description><![CDATA[The RAG era is ending. Most teams haven't noticed yet]]></description><link>https://blog.manishsinha.me/p/your-retrieval-pipeline-is-solving</link><guid isPermaLink="false">https://blog.manishsinha.me/p/your-retrieval-pipeline-is-solving</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Sun, 15 Mar 2026 18:01:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/115bc251-4d7f-4c43-85ae-0bbe491c9bc4_3280x4928.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Retrieval-augmented generation was the right answer to a specific problem: models had small context windows and no reliable way to access external knowledge at inference time. You chunked documents, embedded them, stored vectors, and retrieved the closest matches at query time. It worked. A lot of teams shipped it.</p><p>The problem it was solving is mostly gone. Claude Sonnet 4.6 and GPT-5.4 both ship with 1 million token context windows. Gemini 3.1 Pro had that capability months earlier. For a knowledge base of 10,000 documents or a codebase you want an agent to reason over, you can load the relevant material directly into context. No chunking strategy. No embedding model choice. No similarity threshold to tune. You give the model what it needs and let it work.</p><p>I am not saying vector databases are dead. 
For retrieval over millions of documents with strict latency requirements, they still make sense. But I have watched teams maintain Pinecone pipelines for internal knowledge bases with a few hundred pages of content. The infrastructure was serving a scale problem that did not exist.</p><p>What is replacing naive RAG is something more deliberate. The better teams are building what people now call context engines, systems that decide dynamically what the model needs before each step. Sometimes, that is retrieved documents. Sometimes it is a live API call. Sometimes it is cached history. The decision is made per query, not by a fixed pipeline that always retrieves regardless of whether retrieval helps.</p><p>The models are also better at knowing what they do not know. Ask a frontier model a question about something in its training data, and it answers. Ask it about something that requires live or private data, and it says so. The retrieval decision is no longer purely the engineer&#8217;s job. The model is a participant in it.</p><p>If your team built a RAG pipeline in 2024, it was probably the right call at the time. The question now is whether you are maintaining infrastructure for a problem the model can largely handle on its own. Run the experiment. Load your knowledge base into context and compare the output quality. 
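</p><p>The per-query decision can be sketched as a small router. The heuristics, source names, and token budget below are stand-ins for real routing logic, not a prescription:</p>

```python
# A toy sketch of the "context engine" idea: decide per query what the
# model needs, instead of always retrieving. Every name here is a stand-in.

def plan_context(query: str, kb_tokens: int, context_budget: int = 1_000_000):
    """Decide, per query, which context sources to assemble."""
    sources = []
    if kb_tokens > context_budget:
        sources.append("vector_retrieval")      # too big to inline: retrieve
    else:
        sources.append("full_knowledge_base")   # small enough: load it all
    if any(w in query.lower() for w in ("today", "latest", "current")):
        sources.append("live_api")              # needs fresh data
    return sources

print(plan_context("What is our refund policy?", kb_tokens=80_000))
# -> ['full_knowledge_base']
print(plan_context("What is the latest ticket volume?", kb_tokens=5_000_000))
# -> ['vector_retrieval', 'live_api']
```

<p>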
For most mid-sized knowledge bases, you will find the retrieval layer is patching a limitation that no longer exists.</p>]]></content:encoded></item><item><title><![CDATA[Leverage Pre-trained Models Over Custom Training]]></title><description><![CDATA[Why building from scratch is a sunk cost trap]]></description><link>https://blog.manishsinha.me/p/leverage-pre-trained-models-over</link><guid isPermaLink="false">https://blog.manishsinha.me/p/leverage-pre-trained-models-over</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 11 Feb 2026 05:03:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8ec1d7d0-db34-4ef9-b13c-2735350461d5_2765x3456.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Training a model from scratch looks impressive on paper. Your own architecture. Your own optimization strategy. Complete control. Feels good.</p><p>It&#8217;s also usually a waste of time and money.</p><p>Pre-trained models - GPT variants, BERT, Vision Transformers, and the like - represent billions of dollars and millions of hours of research already sunk. These models have learned general patterns across massive datasets. They&#8217;re proven. They work.</p><p>What they don&#8217;t do is understand your specific domain. That&#8217;s where fine-tuning comes in. You take a pre-trained model and retrain it on your proprietary data. Your domain. Your patterns. Your edge cases. This requires nowhere near the compute, time, or expertise that training from scratch demands.</p><p>The efficiency difference is staggering. A team building a text classification system from scratch might spend six months on architecture, training infrastructure, hyperparameter tuning, and validation. The same team starting with BERT fine-tunes in two weeks. That&#8217;s not a marginal difference. That&#8217;s the difference between shipping next quarter and shipping next month.</p><p>The cost follows the same curve. 
Training from scratch requires specialized infrastructure - GPUs, distributed training frameworks, constant monitoring. Fine-tuning runs on commodity hardware. Development costs drop by an order of magnitude.</p><p>But here&#8217;s what matters more: domain specificity. Your proprietary data is where the real value lives. Fine-tuning captures that value without the overhead of training from scratch. The model learns your terminology, your patterns, your edge cases. It becomes genuinely useful instead of generically mediocre.</p><p>This approach scales across domains. Legal firms fine-tune language models on contract databases. Healthcare providers fine-tune on clinical notes. E-commerce platforms fine-tune on customer behavior. In each case, the domain-specific layer is what creates competitive advantage. The base model is just infrastructure.</p><p>The efficiency frontier in 2025 isn&#8217;t about finding better models. It&#8217;s about better systems engineering around those models. Teams that treat AI as a long-term architectural investment&#8212;modular components, strong governance, incremental execution&#8212;outperform those chasing quick wins.</p><p>This means pragmatic tool selection. Don&#8217;t build custom when pre-trained does the job. Don&#8217;t optimize prematurely. Don&#8217;t treat the model as the system&#8212;the system is everything around it.</p><p>The winners aren&#8217;t the ones with the fanciest models. They&#8217;re the ones who move fastest, maintain control, and compound learning over time. That comes from building on what works instead of rebuilding what already exists.</p><p>Start with pre-trained. 
Build your advantage on top.</p>]]></content:encoded></item><item><title><![CDATA[Pilot your AI Application Rigorously, Scale Incrementally]]></title><description><![CDATA[How to prove AI works without destroying your business]]></description><link>https://blog.manishsinha.me/p/pilot-your-ai-application-rigorously</link><guid isPermaLink="false">https://blog.manishsinha.me/p/pilot-your-ai-application-rigorously</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Mon, 09 Feb 2026 18:43:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6f3030c5-8a76-49ff-b7da-6e9c93a800de_4032x3024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The worst deployments follow the same pattern: teams skip pilots entirely and roll out across the entire system on day one. By day two, they&#8217;re dealing with cascading failures. By day three, they&#8217;re rolling back. The lesson learned is expensive and late.</p><p>Here&#8217;s what works instead: pick one non-critical component. A batch job that runs overnight. A reporting function. An internal tool. Something isolated enough that failure doesn&#8217;t spread. Run the AI system there for a month while keeping the old approach running in parallel. See what breaks. Fix it. Prove it works. Then expand.</p><p>This isn&#8217;t risk aversion. It&#8217;s how you actually validate assumptions before they become expensive.</p><p>The best pilots aren&#8217;t glamorous work. Batch jobs nobody notices if they&#8217;re slow. Reporting that takes two hours to generate&#8212;if the AI version cuts it to 30 minutes but makes occasional errors, you&#8217;ve learned something valuable. Internal tools used by a small team. Completely isolated services.</p><p>This sandbox is where you surface real problems. Integration gotchas that only appear under load. Performance assumptions that don&#8217;t hold. Edge cases nobody anticipated. 
Bias patterns that show up when you run at scale.</p><p>Running AI in parallel with existing systems reveals patterns quickly. Let the AI make recommendations while humans still decide. Compare outputs. Watch for misses, false positives, systemic biases. Catching these before full deployment is the difference between shipping confidently and shipping recklessly.</p><p>The organizational benefit matters too. When pilots succeed, stakeholders believe the next initiative might work. Non-technical people see results. Budget approval gets faster. You&#8217;ve built credibility&#8212;you&#8217;re not chasing technology, you&#8217;re shipping value.</p><p>Scaling before proving is how projects become disasters. Starting small, running long enough to see patterns, comparing against the old way&#8212;this takes longer upfront. It pays for itself a hundred times over when you avoid the catastrophe that happens when you skip it.</p><p>Boring implementation beats catastrophic deployment.</p>]]></content:encoded></item><item><title><![CDATA[Prioritize AI Data Infrastructure Over AI Model Sophistication]]></title><description><![CDATA[Why your fancy model is garbage if your data is garbage]]></description><link>https://blog.manishsinha.me/p/prioritize-ai-data-infrastructure</link><guid isPermaLink="false">https://blog.manishsinha.me/p/prioritize-ai-data-infrastructure</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Thu, 05 Feb 2026 20:17:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d295fd94-69a9-4bb7-a4f3-15aa7fa41bf5_4928x3264.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A bunch of people I know spent eight months building an ML system that was technically beautiful. Custom loss functions. Ensemble methods. Hyperparameter tuning that would make a researcher weep. The model was state-of-the-art.</p><p>It was also predicting wrong on half their data because the data was corrupted. 
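</p><p>For a sense of how little code a basic gate takes, here is a sketch that rejects corrupted rows before they reach a model. The field names and checks are hypothetical, not from any team&#8217;s actual pipeline:</p>

```python
# A sketch of a validation gate: reject bad rows before they reach the
# model. The field names (event_time, amount, id) are hypothetical.
from datetime import datetime, timezone

def validate_row(row: dict) -> list:
    """Return a list of problems; empty means the row passes."""
    problems = []
    ts = row.get("event_time")
    if not isinstance(ts, datetime) or ts.tzinfo != timezone.utc:
        problems.append("event_time missing or not UTC")
    if row.get("amount") is None:
        problems.append("amount is null")
    return problems

def gate(rows: list) -> tuple:
    """Split rows into (clean, rejected), deduplicating by id."""
    clean, rejected, seen = [], [], set()
    for row in rows:
        if row.get("id") in seen:
            rejected.append(row)        # duplicate
        elif validate_row(row):
            rejected.append(row)        # failed a check
        else:
            seen.add(row.get("id"))
            clean.append(row)
    return clean, rejected

ok = {"id": 1, "event_time": datetime(2026, 1, 1, tzinfo=timezone.utc), "amount": 10}
bad = {"id": 2, "event_time": "2026-01-01", "amount": None}
clean, rejected = gate([ok, bad, ok])
print(len(clean), len(rejected))   # -> 1 2
```

<p>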
I was intrigued and dug into the mistakes they made. I&#8217;ll use this knowledge sometime in the future.</p><p>They&#8217;d been feeding it timestamps from three different systems, some in UTC, some in local time, some just wrong. They had duplicates they didn&#8217;t know about. They had null values they&#8217;d filled with averages without understanding what the averages meant. The model wasn&#8217;t the problem. The data was.</p><p>This is the efficiency trap nobody talks about. Teams chase sophisticated models when they should be building pipelines.</p><p>Models only work as well as their input. You can have the most advanced architecture in the world. If it&#8217;s learning patterns from bad data, it&#8217;s learning noise. Fix the data and suddenly your simple model outperforms the complex one.</p><p>The infrastructure that matters is boring. Apache Kafka to stream data reliably. Snowflake or Databricks to manage it at scale. Clear data contracts that specify what fields mean, what format they&#8217;re in, when they arrived. Version control on transformations so you know what changed when. Validation gates that catch bad data before it reaches your model.</p><p>This stuff isn&#8217;t exciting. It doesn&#8217;t get talks at conferences as much as state-of-the-art models do. But it pays dividends immediately because most AI systems are starving for quality data. They&#8217;re eating whatever gets thrown at them.</p><p>There&#8217;s another payoff: the same infrastructure serves both analytics and AI. Traditional batch ETL architectures&#8212;data warehouses that update nightly&#8212;can&#8217;t support real-time decision-making. But a well-built data pipeline with Kafka and proper streaming can feed both your analytics dashboards and your AI systems, in real time, at the same time.</p><p>Via word of mouth, I heard about a fraud detection team that had terrible latency until they rebuilt their data pipeline. The model was fine. 
But it was working with data that was hours old. They moved to a streaming architecture. Latency dropped from 45 minutes to 30 seconds. That&#8217;s not tuning the model. That&#8217;s plain old infrastructure.</p><p>Here&#8217;s what matters: spend on data infrastructure first. Build pipelines that are robust, observable, and scalable. Then add your model on top. The opposite&#8212;fancy model, sketchy data&#8212;is how you end up debugging predictions you don&#8217;t understand.</p><p>Data infrastructure is where the real efficiency lives.</p>]]></content:encoded></item><item><title><![CDATA[Implement AI Governance Before Scale]]></title><description><![CDATA[Why early process design matters more than model selection]]></description><link>https://blog.manishsinha.me/p/implement-ai-governance-before-scale</link><guid isPermaLink="false">https://blog.manishsinha.me/p/implement-ai-governance-before-scale</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 04 Feb 2026 00:11:03 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d7c73240-8624-4439-808f-a490241d1f4d_2667x3871.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Here&#8217;s what happens when you skip governance: a team builds an AI system that works beautifully in testing. They push it to production. It makes a decision that costs the company $200K. Nobody knows how it made that decision. Nobody can explain it to the board. Nobody can prevent it from happening again.</p><p>Then the lawyers get involved.</p><p>Think of two hypothetical ways it can play out. First, with a lending system that rejected applicants based on a pattern the model learned that nobody intended. Second, with a content moderation system that over-corrected and took down legitimate business accounts. In both cases, the technology was sound. The governance was nonexistent.</p><p>Here&#8217;s the thing: AI amplifies what you already do. 
If you have strong code review practices, AI-generated code gets better because reviewers catch more issues. If you have weak ones, everything gets worse because now you&#8217;re generating code faster than you can possibly validate it.</p><p>Most teams discover this too late - when they&#8217;re already at scale.</p><p>The fix is unglamorous. You need code review processes that work for AI-generated code. Different focus areas than traditional reviews, but mandatory. You need version control for models and prompts, not just code. You need validation frameworks that specify how decisions get approved before they affect customers. You need clear escalation paths so a questionable prediction goes to a human before it becomes a problem.</p><p>But here&#8217;s what actually matters: human-in-the-loop validation isn&#8217;t optional for production systems. It&#8217;s architectural. You design it in from the start or you&#8217;re bolting it on desperately later.</p><p>A fintech team I worked with learned this the hard way. They launched a system without proper escalation paths. Six months in, they realized they had no idea which decisions had human review and which didn&#8217;t. Audit trail was a mess. They had to pause the system for a month to retrofit governance. Could have been avoided with two weeks of thinking at the beginning.</p><p>The regulatory angle pushes this further. Financial services, healthcare, anything regulated - auditors want to see decision trails. They want to understand why the system did what it did. If you built governance in, you have logs and traces. If you didn&#8217;t, you&#8217;re explaining why you can&#8217;t explain anything.</p><p>Version everything. Trace everything. Validate automatically where you can. Escalate to humans where you can&#8217;t. This isn&#8217;t bureaucracy. This is the infrastructure that lets you ship confidently instead of nervously.</p><p>Start with this foundation and scale becomes manageable. 
Skip it and scale becomes a liability.</p>]]></content:encoded></item><item><title><![CDATA[Treat AI as an Augmentation Layer, Not a Replacement]]></title><description><![CDATA[The case for wrapping, not ripping out]]></description><link>https://blog.manishsinha.me/p/treat-ai-as-an-augmentation-layer</link><guid isPermaLink="false">https://blog.manishsinha.me/p/treat-ai-as-an-augmentation-layer</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Mon, 02 Feb 2026 05:01:50 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2c431e67-64e5-4c24-aa9a-bdc01d93ba9c_3088x2056.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every CIO dreams of blowing up their legacy systems and starting fresh. Clean slate. Modern architecture. No baggage.</p><p>Then they get the cost estimate. It&#8217;s in the hundreds of millions. The timeline is three years, minimum. And somewhere around month six, they realize they&#8217;ve broken seventeen critical processes and there&#8217;s no going back.</p><p>This is why wrapping beats replacement every single time.</p><p>Your legacy system works. It&#8217;s been handling your business logic for fifteen years. It&#8217;s slow, it&#8217;s undocumented, it&#8217;s written in COBOL nobody understands anymore&#8212;but it works. Ripping it out to start over isn&#8217;t modernization. It&#8217;s a bet-the-company gamble disguised as engineering.</p><p>The smarter move is to leave it alone and build around it.</p><p>Create an API layer between your legacy system and the outside world. Add event streams that capture what&#8217;s happening. Then layer your AI on top&#8212;it sees the events, makes decisions, writes back through the APIs. The legacy system stays intact. The business logic never gets touched. You modernize without the risk.</p><p>This sounds simple. It is, relatively speaking. 
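</p><p>Here is the shape of that wrapper in miniature. The legacy call, the event fields, and every name below are illustrative stand-ins, not a real system:</p>

```python
# A sketch of the wrap-don't-replace pattern: a thin API layer fronts the
# legacy call and publishes events for the AI layer. All names are stand-ins.

def legacy_quote(customer_id: str) -> float:
    # imagine fifteen-year-old business logic behind this call
    return 42.0

class LegacyWrapper:
    def __init__(self, events: list):
        self.events = events            # stand-in for a real event stream

    def get_quote(self, customer_id: str) -> float:
        price = legacy_quote(customer_id)           # legacy logic untouched
        self.events.append({"type": "quote_issued",
                            "customer": customer_id,
                            "price": price})
        return price

events = []
api = LegacyWrapper(events)
api.get_quote("c-17")
print(events[0]["type"])   # -> quote_issued
```

<p>The AI layer reads the event stream and writes back through the wrapper, never against the legacy system directly.</p><p>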
But most teams skip it because they&#8217;re seduced by the idea of starting fresh.</p><p>Generative AI changed the equation. Now you can actually understand what those legacy systems are doing. I worked with a financial services team stuck maintaining a 30-year-old system written in COBOL. They had three people who knew how it worked, all approaching retirement. Instead of rewriting, they fed the codebase to a language model. It extracted business rules, mapped dependencies, translated chunks to Python. Not magic - they still had to validate everything. But they went from &#8220;this is unmaintainable&#8221; to &#8220;we can actually work with this&#8221; in weeks.</p><p>McKinsey ran the numbers on this approach. Teams using gen AI for legacy refactoring hit 70% better productivity than traditional methods. That&#8217;s not marginal improvement. That&#8217;s transformative.</p><p>The payoff is triple. You reduce risk - no massive rewrite means no massive failure risk. You keep the business logic intact - nothing gets lost in translation. And you modernize incrementally - you wrap one piece, prove the pattern works, wrap another.</p><p>A year in, you&#8217;ve added AI capabilities without touching the legacy system. Two years in, you&#8217;ve reduced its load because you&#8217;re handling new work in the modern layer. Eventually it becomes less critical, not because you murdered it, but because you built better alternatives around it.</p><p>That&#8217;s how you actually retire legacy systems. Not with a bang. 
With patience and APIs.</p>]]></content:encoded></item><item><title><![CDATA[Adopt Modular, API-First Architecture for your AI Application]]></title><description><![CDATA[Why monolithic AI systems are a debt trap]]></description><link>https://blog.manishsinha.me/p/adopt-modular-api-first-architecture</link><guid isPermaLink="false">https://blog.manishsinha.me/p/adopt-modular-api-first-architecture</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Thu, 29 Jan 2026 22:00:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3b1744c3-88f9-4798-9417-9b1cf05552e8_5120x2880.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building a monolith feels efficient at first. You thread everything together&#8212;data pipeline feeds the model, model outputs feed your dashboard. One system. One deployment. One thing to maintain.</p><p>Then your model needs updating. You redeploy. Everything goes down. Or you want to scale the inference layer but the data pipeline can&#8217;t handle it. You&#8217;re stuck because everything&#8217;s coupled.</p><p>I&#8217;ve seen teams spend six months building sophisticated ML systems only to realize they can&#8217;t upgrade a dependency without breaking production. They can&#8217;t swap out a model that&#8217;s underperforming. They can&#8217;t test changes without risking the whole thing. That&#8217;s the cost of pretending modularity is optional.</p><p>Efficient AI systems are built differently. Data ingestion, model serving, and orchestration operate as independent pieces. You change one without touching the others. That&#8217;s not just cleaner engineering - it&#8217;s the difference between iterating in weeks and iterating in months.</p><p>The practical payoff shows up fast. A team I worked with had a fraud detection system that could only update their model quarterly because redeployment was risky. We split the architecture: the model ran in its own service behind an API. 
The data pipeline fed it independently. Suddenly they were swapping models weekly, testing approaches in parallel, scaling the prediction layer without touching anything else.</p><p>Cloud-native principles&#8212;containers, Kubernetes, managed services&#8212;aren&#8217;t buzzwords if you actually need to run at scale. They let you define infrastructure as code, spin up environments consistently, and avoid the &#8220;it works on my machine&#8221; nightmare. More importantly, they force you to think in modules because distributed systems require it.</p><p>For legacy environments, API-driven patterns are lifelines. You don&#8217;t rip out old systems. You wrap them. Build an API layer between your legacy database and your new AI components. They talk through well-defined contracts. The legacy system doesn&#8217;t care that you&#8217;ve upgraded your model. Your model doesn&#8217;t care about the creaky old business logic underneath.</p><p>This approach costs more upfront. You&#8217;re thinking about boundaries, contracts, failure modes. You&#8217;re not just stitching things together. But that cost compounds into savings: you iterate faster, fail safer, scale without panicking.</p><p>The teams that move fastest aren&#8217;t the ones who build the fanciest models. They&#8217;re the ones who can change components without everything collapsing. Start with modularity.
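</p><p>One way to sketch that separation, with hypothetical names and a threshold rule standing in for a real model: the rest of the system depends only on a small scoring contract, so swapping the model touches one line and nothing else.</p>

```python
from typing import Protocol

class FraudModel(Protocol):
    """Hypothetical contract: callers depend on this, never on a concrete model."""
    def score(self, transaction: dict) -> float: ...

class RuleBasedModel:
    def score(self, transaction: dict) -> float:
        return 0.9 if transaction["amount"] > 1_000 else 0.1

class NewerModel:
    def score(self, transaction: dict) -> float:
        base = 0.8 if transaction["amount"] > 1_000 else 0.2
        return min(1.0, base + 0.1 * transaction.get("retries", 0))

class ScoringService:
    """Serving layer: holds *a* model, not *the* model."""
    def __init__(self, model: FraudModel):
        self.model = model

    def is_suspicious(self, transaction: dict) -> bool:
        return self.model.score(transaction) > 0.5

service = ScoringService(RuleBasedModel())
print(service.is_suspicious({"amount": 5_000}))  # True

service.model = NewerModel()  # the weekly model swap: one line, nothing else redeployed
print(service.is_suspicious({"amount": 200}))    # False
```

<p>The data pipeline and the dashboard sit on either side of ScoringService and never learn which model is inside it.</p><p>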
You&#8217;ll thank yourself in six months.</p>]]></content:encoded></item><item><title><![CDATA[Start with AI Business Alignment, Not AI Technology]]></title><description><![CDATA[I&#8217;ve watched a lot of AI projects fail.]]></description><link>https://blog.manishsinha.me/p/start-with-business-alignment-not</link><guid isPermaLink="false">https://blog.manishsinha.me/p/start-with-business-alignment-not</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 28 Jan 2026 02:49:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5cad5fc1-9209-4b74-8c7e-6038581bff22_5068x3379.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve watched a lot of AI projects fail. Most of them failed in the first week&#8212;before a single line of code got written.</p><p>The pattern&#8217;s always the same. Someone gets excited about a new model and decides <em>this is the one</em>. The engineering team builds something sophisticated. Six months and $500K later, they&#8217;re explaining to executives why there are no measurable returns.</p><p>The turning point isn&#8217;t better algorithms. It&#8217;s asking a simple question first: what problem are we actually trying to solve?</p><p>Teams usually get this backwards. They pick the flashiest technology. They build systems designed for scale before proving the concept works. Then they wonder why they&#8217;re burning budget on infrastructure nobody uses.</p><p>The successful ones start different. A financial services company didn&#8217;t ask &#8220;how do we implement cutting-edge ML?&#8221; They asked: &#8220;where are developers wasting time?&#8221; Answer: debugging stack traces in legacy systems. One engineer spent 4 hours weekly on it. They built an AI tool for that one thing. $50K cost. 200 hours saved annually. Problem solved.</p><p>Another team wanted to &#8220;leverage AI&#8221; broadly. Vague and expensive. 
We dug into actual pain points and found their testing was completely manual. Test case generation became the focus. Unsexy work, but they shipped 30% faster with measurable returns in weeks.</p><p>Most AI problems aren&#8217;t AI problems&#8212;they&#8217;re execution problems. You&#8217;ve got bottlenecks: expensive manual work, slow turnaround, incomplete data. AI addresses some of that. But only if you identify them clearly before building.</p><p>Stack trace analysis. Code refactoring. Test generation. Domain automation. These aren&#8217;t glamorous. They don&#8217;t get conference talks. But they move the needle because they solve specific, measurable problems.</p><p>When you know exactly what you&#8217;re building and why, you stop overengineering. You don&#8217;t build for scale when you need speed. You don&#8217;t implement multi-model orchestration when an API wrapper works. You don&#8217;t burn three months debating architecture when you could ship in two weeks.</p><p>This also builds organizational credibility. Ship something that works, show the returns, and stakeholders trust the next project. Build something cool but pointless, and you&#8217;ve spent political capital you won&#8217;t recover.</p><p>The critical efficiency gain happens at the beginning. Before tool selection, before architecture, before hiring. Define the business problem. Make it specific. Quantify success.</p><p>Then pick the technology that solves it.</p><p>Start there and you&#8217;re already ahead of most teams.</p>]]></content:encoded></item><item><title><![CDATA[Don’t ask to ask. 
Just ask.]]></title><description><![CDATA[People aren&#8217;t opposed to the idea of helping; they are opposed to the idea of having their time wasted]]></description><link>https://blog.manishsinha.me/p/dont-ask-to-ask-just-ask</link><guid isPermaLink="false">https://blog.manishsinha.me/p/dont-ask-to-ask-just-ask</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 25 Sep 2024 03:52:55 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/53f75366-bd82-46c8-945f-264d1f649da8_2000x3000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine a conversation</p><blockquote><p>Person 1: Hi</p><p>Person 2: Yes, tell me</p><p>Person 1: I have a question </p><p>Person 2: sure, go ahead</p><p>Person 1: I was thinking about contributing to this project&#8230;&#8230;</p></blockquote><p>By the time Person 1 got to ask their question, multiple hours had elapsed, and Person 2 had to break their attention numerous times just to move the conversation ahead.</p><p>What should you do instead? Get straight to the point. Don&#8217;t ask to ask. It&#8217;s not a movie where you must set up the suspense and focus on character development. Just ask the question.</p><blockquote><p>Person 1: hey, I wanted some help from you. I wanted to contribute to this project. Can you introduce me to some critical people and send me some reading material to understand the background? I would really appreciate your help.</p></blockquote><p>Depending on how the message is drafted, it can come off as crass, since you didn&#8217;t lay any foundation before asking your question. The real question is whether the recipient considers it rude.</p><p>An individual who is regularly approached for help values their time highly.
To respect that, the best course of action is to &#8220;Just ask the question.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[Don't bury the lede when answering a behavioral question]]></title><description><![CDATA[Use the STAR format and quickly get to the point]]></description><link>https://blog.manishsinha.me/p/dont-bury-the-lede-when-answering</link><guid isPermaLink="false">https://blog.manishsinha.me/p/dont-bury-the-lede-when-answering</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Thu, 12 Sep 2024 17:27:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5646d42c-ab9a-4e46-8f9a-c10bdf1cc6ea_5030x3353.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You need to understand that time is working against you during a behavioral interview. It is critical to hook the interviewer with your answer early. The worst mistake you can make is to ramble for a minute or two before getting to the point. This is the age of limited attention spans, and you need to adapt your strategy accordingly.</p><p>Different interviewers have different attention spans, but you can't count on an interviewer you have never met to be one of the patient ones. It's safer to assume that everyone you meet has a short attention span.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3>Zip straight to the Situation</h3><p>It&#8217;s a well-known secret that all behavioral questions should be answered in the Situation-Task-Action-Result (STAR) format. The immediate goal is to craft a strong hook so that the interviewer is invested in your answer. That hook is the mystery, or in our case, the situation. Remove all fluff when you are explaining the situation, and save it for when the interviewer asks a follow-up question.</p><p>In fact, this strategy can be very effective, as the background and specifics that you didn't provide might end up being the next follow-up question. Take care that you provide just enough introduction and background, else the interviewer might lose interest.</p><p>Imagine the question &#8220;<strong>Tell me about the most challenging project that you worked on</strong>&#8221;.</p><p>Even though it might not look like a question suited to the STAR format, it fits the structure well.</p><p>For this question, you can start with:</p><blockquote><p>&#8220;<strong>In my current company, Apple, I was working on the Apple Health iOS app, where we were expected to integrate the app with hospital, network, and lab online records so that users can have all their health data from all providers in one place</strong>&#8221;</p></blockquote><p>We started with (a) the company name, (b) the product name, (c) the project or situation, followed by (d) why this is important.</p><p>Explaining the impact of the project is important since it showcases why you are working on it.
It shows that you understand the objective rather than just doing what you are told to do.</p><h3>Move quickly to your actions</h3><p>Do not spend too much time on the Situation. It just needs to be a quick opener, setting the stage for your answer. </p><p>The Task section is a quick overview of what had to be achieved, e.g.</p><blockquote><p>&#8220;<strong>I had to work with the public relations team to coordinate which providers would be supported, and I had to coordinate with the Data Lake team to find ways to actually fetch the data</strong>&#8221;</p></blockquote><p>The Task section can also explain why it was a challenging project in the first place.</p><p>The Action would be something along the lines of</p><blockquote><p>&#8220;<strong>I reached out to the TPM on the public relations team with a quick outline of the problem. I set up a meeting with her and, two days beforehand, sent her the agenda, including the objective of my project and the challenges I anticipated. In parallel, I reached out to the SDM on the Data Lake team about the capabilities I was looking for. I also reached out to the Security team about how to secure health data locally on the iPhone</strong>&#8221;</p></blockquote><h3>Wrap it up quick</h3><p>Make the results sound appealing by getting to the actual numbers quickly. Some phrases that convey such numbers are:</p><ul><li><p>&#8220;We completed the project in 4 months, which was a month ahead of schedule&#8221;</p></li><li><p>&#8220;We reduced the latency from X &#8594; Y (Z% decrease) in the 4-month period&#8221;</p></li><li><p>&#8220;We released the product and saw 7% month-over-month growth in daily active users&#8221;</p></li><li><p>&#8220;We saw a 7% decrease in storage usage by switching to xz compression, which translates to $900K of monthly cost savings, or 12% of the storage costs&#8221;</p></li></ul><p>Attaching a $ value to your answers is impactful.
Make sure the numbers make sense and that it does not sound like you are making them up, even if the cost savings were real. It&#8217;s very hard to change the perception if the interviewer has concluded that you cooked up a number to impress them.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You are not your resume's audience, so stop designing it for yourself]]></title><description><![CDATA[Think how a recruiter would scan your resume]]></description><link>https://blog.manishsinha.me/p/you-are-not-your-resumes-audience</link><guid isPermaLink="false">https://blog.manishsinha.me/p/you-are-not-your-resumes-audience</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Thu, 22 Aug 2024 20:55:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lwFJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Working on your resume is as much an art as it is a science. We don&#8217;t exactly know what works best since everyone has conflicting opinions.
If you ask 10 people about the best resume design, you will get 10 different answers. If you ask people across industries, the answers are even more varied. </p><p>The first mistake we make is tailoring the resume to look good and fit our own aesthetic sensibilities. You are not the audience of your resume. It doesn&#8217;t matter what opinion you have of your resume. What does a recruiter think about it? Are they impressed? Did they lose interest?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The actual question should be &#8220;How does a recruiter scan your resume?&#8221; Given that they have hundreds of resumes to scan through, why would yours stand out? Are they looking for a list of technical skills or your work experience? Which one would you present first? Is the Education section more important than Experience?</p><p>While this is an old study, <a href="https://www.businessinsider.com/heres-what-recruiters-look-at-during-the-6-seconds-they-spend-on-your-resume-2012-4">Business Insider reported in 2012</a> that recruiters look at your resume for just 6 seconds. That&#8217;s not a lot of time, which means they are skimming it. If the resume is not eye-catching, the details are irrelevant.
The layout matters more than contents if you want to cross the first hurdle - getting shortlisted.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lwFJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lwFJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!lwFJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg" width="728" height="725.5322033898306" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:588,&quot;width&quot;:590,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;recruiters resume&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="recruiters resume" title="recruiters resume" srcset="https://substackcdn.com/image/fetch/$s_!lwFJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!lwFJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5f525fd-7a46-43ef-a502-7818d0989305_590x588.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" 
stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Credit: TheLadders and BusinessInsider</figcaption></figure></div><p>The one on the right has clearly marked sections, providing guide rails for the recruiter to jump wherever they wish. </p><p>While there are millions of ways to fix your resume, there is no &#8220;correct&#8221; format, just guidelines to improve your odds:</p><ol><li><p>Overview, Experience, Skills, Projects, Education, etc. should each have their own section, with clear separation between sections</p></li><li><p>The most recent experience matters.</p></li><li><p>Draw attention to the Company Name and Title by bolding them. Don&#8217;t bold duration, location, and description. When everything is bold, nothing is bold.</p></li><li><p>If you are in tech, when writing the details of each job experience, broadly follow these steps:</p><ol><li><p>Start each sentence with an action verb, e.g.
&#8220;Improved&#8221;, &#8220;Reduced&#8221;, &#8220;Saved Cost&#8221;, etc.</p></li><li><p>Use numbers to justify your achievements. They should be believable. </p></li></ol></li><li><p>For Education, just bold the name of the university. If the recruiter is interested, they can read more. </p></li><li><p>Skillset: just have two lines. The first line is a comma-separated list of technical skills, and the second line is for non-technical skills. These are probably for the ATS, not for humans.</p></li><li><p>Recruiters are likely to scan your resume from top to bottom, looking at just the leftmost word. If the leftmost word is catchy, they might read the rest. That&#8217;s why we start each sentence with an action verb, and why the leftmost side should have the name of the company.</p></li></ol><p>Let&#8217;s look at what an experience section could look like:</p><div><hr></div><p><strong>CompanyName</strong>                                                                         Location, State</p><p><strong>Title/Position</strong>                                                                    From - To (Duration)</p><ol><li><p>Improved user satisfaction tracked using NPS from X to Y (17% improvement), by taking specific actions</p></li><li><p>Reduced request latency from X to Y (8% reduction), by investigating bottlenecks and optimizing critical code paths</p></li><li><p>Saved cost of $400K by analyzing log access patterns and updating the archival policy.</p></li></ol><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha!
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You might not be a security engineer, but you can think like one]]></title><description><![CDATA[Security is a process]]></description><link>https://blog.manishsinha.me/p/you-might-not-be-a-security-engineer</link><guid isPermaLink="false">https://blog.manishsinha.me/p/you-might-not-be-a-security-engineer</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Mon, 19 Aug 2024 17:01:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/v8Ry1C8AnXk" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We hear about security breaches every day, and they seem to be growing at an enormous rate. The tech industry needs to transform, and everyone in it needs to adopt a security-oriented mindset.
Security professionals are highly valued for their crucial role, but the belief that security is solely their responsibility is outdated and potentially risky.</p><p>As Bruce Schneier, a renowned security technologist, states, "<a href="https://www.amazon.com/Secrets-Lies-Digital-Security-Networked/dp/1119092434">Security is not a product but a process</a>." Every individual within an organization, from developers to managers, must embrace security as an integral part of daily operations.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>The impact is significant</h2><p>I cannot stress enough the importance of this collective approach to security. The facts speak for themselves: <a href="https://www.ibm.com/reports/data-breach">according to IBM</a>, the average cost of a data breach in 2021 was a staggering $4.24 million, with the healthcare industry facing an even more alarming average cost of $9.23 million per incident.
These aren't just numbers &#8211; they represent a significant financial impact and a potential erosion of customer trust, not to mention the harm inflicted on victims.</p><p>I've seen the <a href="https://www.csoonline.com/article/534628/the-biggest-data-breaches-of-the-21st-century.html">consequences of major cybersecurity incidents</a>, and they're sobering. Take the 2017 Equifax breach, which exposed the sensitive information of 147 million people. Equifax and FTC <a href="https://www.ftc.gov/news-events/news/press-releases/2019/07/equifax-pay-575-million-part-settlement-ftc-cfpb-states-related-2017-data-breach">reached an agreement</a> where Equifax would pay between $575M - $700M. This still fails to undo the damage to the victim, who might have to worry about their stolen identity for the rest of their lives.</p><p>The 2020 SolarWinds supply chain attack compromised numerous government agencies and corporations. The SolarWinds breach was so impactful that the Government Accountability Office (GAO) published &#8220;<a href="https://www.gao.gov/blog/solarwinds-cyberattack-demands-significant-federal-and-private-sector-response-infographic">SolarWinds Cyberattack Demands Significant Federal and Private-Sector Response (Infographic)</a>.&#8221; </p><p>Let me be perfectly clear: adopting a security mindset goes far beyond implementing best practices in code. It encompasses operational security measures and demands a heightened awareness of potential threats. Consider the 2011 RSA breach, which compromised SecurID two-factor authentication tokens. This wasn't a result of sophisticated hacking&#8212;it was initiated through a <a href="https://www.theregister.com/2011/08/26/rsa_attack_email_found/">simple phishing email</a>. 
This incident highlights the importance of staying alert against social engineering attacks, which manipulate human psychology instead of exploiting technical vulnerabilities.</p><div id="youtube2-v8Ry1C8AnXk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;v8Ry1C8AnXk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/v8Ry1C8AnXk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>We need to change our thought process</h2><p>I'm convinced that to think like a security engineer, we must cultivate a mindset of constant vigilance and healthy skepticism. This means questioning assumptions, anticipating potential vulnerabilities, and considering the security implications of every decision we make.</p><p>Ross Anderson, in his seminal work "<a href="https://www.amazon.com/Security-Engineering-Building-Dependable-Distributed/dp/1119642787">Security Engineering</a>" rightly points out that security is often compromised not by breaking specific mechanisms but by exploiting oversight and complacency. I firmly believe that by fostering a culture where every team member actively contributes to security efforts, we can create a more robust defense against cyber threats.</p><p>To develop this mindset, it's important to stay informed about current threats, participate in security training programs, and integrate security considerations into every stage of the software development lifecycle. I'm particularly excited about "shift-left security," which promotes incorporating security practices earlier in the development process. 
The results speak for themselves: <a href="https://www.splunk.com/en_us/form/2019-state-of-devops-report.html">a study by Puppet Labs</a> found that organizations implementing DevSecOps practices spend 50% less time remediating security issues. That's not just efficient &#8211; it's smart security.</p><h2>Wrapping up</h2><p>I want to emphasize this point: while not everyone needs to become a security engineer, adopting their mindset is absolutely crucial in today's threat landscape.</p><p>By internalizing the fact that security is everyone's responsibility, remaining vigilant against various forms of attacks, and integrating security considerations into all aspects of our work, we can significantly enhance our resilience against cyber threats.</p><p>As the cybersecurity landscape continues to evolve, I'm convinced that this collective approach to security will become increasingly vital in protecting sensitive data and maintaining trust in our digital ecosystems. It's time for all of us to step up and think like security engineers &#8211; our digital future depends on it.</p>]]></content:encoded></item><item><title><![CDATA[Is 100% Code Coverage the Best Use of Your Time?]]></title><description><![CDATA[How can that time be re-allocated?]]></description><link>https://blog.manishsinha.me/p/is-100-code-coverage-the-best-use</link><guid isPermaLink="false">https://blog.manishsinha.me/p/is-100-code-coverage-the-best-use</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Wed, 14 Aug 2024 17:01:31 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cc9cde70-8fdf-4f54-996f-f394ace05e15_1536x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In software development, achieving 100% code coverage is often seen as crucial. However, it may not always be the best use of an engineering team's limited resources. 
While code coverage is valuable, it's important to recognize its limitations and explore other approaches that could lead to better outcomes.</p><p>Code coverage measures the percentage of code that is executed during testing. Simply running each line of code once, however, doesn't ensure that all potential issues will be detected; we also need to test different combinations of values. A codebase might have 15 tests for one method and none for the next 5 methods, which might seem surprising, but not all code paths are equally important.</p><p>Martin Fowler, a renowned software developer, <a href="https://martinfowler.com/bliki/TestCoverage.html">aptly puts it</a>:</p><blockquote><p>"Test coverage is a useful tool for finding untested parts of a codebase. Test coverage is of little use as a numeric statement of how good your tests are."</p></blockquote><p>I strongly believe in focusing on the quality and effectiveness of tests, rather than just the quantity. Instead of aiming for 100% coverage, I am confident that investing time in designing comprehensive test cases that cover critical paths and edge cases is more beneficial.</p><div><hr></div><p>I believe that striving for complete code coverage can often result in diminishing returns. The effort required to increase coverage from 95% to 100% can be disproportionately high, leading to a significant rise in the number of test cases. 
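</p><p>To make this concrete, here is a minimal, hypothetical sketch (the apply_discount function and its values are invented for illustration, not taken from any real codebase): a single test can execute every line of a function, so a line-coverage tool reports 100%, while entire classes of inputs go unexamined.</p>

```python
# Hypothetical example: 100% line coverage can still miss a bug.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after deducting a percentage discount."""
    discounted = price - price * percent / 100
    return round(discounted, 2)

# This one test executes every line of apply_discount, so a line-coverage
# tool reports 100% for the function...
assert apply_discount(100.0, 10) == 90.0

# ...yet no test probes other value combinations. A negative percent
# silently *raises* the price, and a percent above 100 yields a negative
# price; line coverage alone gives no hint that these cases are untested.
print(apply_discount(100.0, -10))   # 110.0: a "discount" that adds money
print(apply_discount(100.0, 150))   # -50.0: a negative price
```

<p>Branch coverage and property-based testing narrow this gap somewhat, but the underlying point stands: what matters is which value combinations the tests exercise, not merely which lines they touch.</p>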
This expansion can make the test suite more fragile and challenging to maintain over time.</p><p>As noted by Steve McConnell in his book "<a href="https://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670">Code Complete</a>":</p><blockquote><p>"Trying to achieve 100% coverage often results in a significant amount of additional testing effort for minimal gain."</p></blockquote><p>Moreover, I must always consider the long-term maintenance cost of a fully covered codebase. As our software evolves, keeping all tests up-to-date can become a time-consuming task that may slow down our development and hinder our ability to ship new features quickly. This brings me to a crucial question: What do our customers truly care about? In most cases, they prioritize a stable, feature-rich product delivered in a timely manner over a theoretical measure of code quality.</p><div><hr></div><p>In my opinion, while code coverage is certainly helpful, it must not overshadow other vital aspects of software development. I advocate for a well-rounded approach that incorporates strategic test coverage alongside other quality assurance methods.</p><p>As suggested by Glenford Myers in "<a href="https://www.amazon.com/Art-Software-Testing-Glenford-Myers/dp/1118031962">The Art of Software Testing</a>":</p><blockquote><p>"The objective of testing is to find errors. 
Therefore, a good test case is one that has a high probability of detecting an as-yet undiscovered error."</p></blockquote><p>I believe that by focusing on creating effective tests that address potential problem areas and essential functionalities, my team and I can optimize our time and resources, ultimately providing more value to our customers.</p>]]></content:encoded></item><item><title><![CDATA[Customer Oriented mindset vs Engineering Oriented mindset]]></title><description><![CDATA[You don't necessarily have to choose]]></description><link>https://blog.manishsinha.me/p/customer-oriented-mindset-vs-engineering</link><guid isPermaLink="false">https://blog.manishsinha.me/p/customer-oriented-mindset-vs-engineering</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Mon, 12 Aug 2024 17:01:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/47c0ab19-8f3e-4478-89aa-b02e5f96fac2_1536x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my extensive software development experience, I've identified two primary mindsets: Customer-Oriented and Engineering-Oriented. The Customer-Oriented mindset places users at the heart of decision-making, while the Engineering-Oriented mindset prioritizes technical excellence and best practices. Both approaches aim to create high-quality software, but their focuses differ significantly.</p><p>Many engineers tend to adopt an engineering-oriented mindset, concentrating on delivering technically sound solutions. They take pride in writing elegant code, optimizing performance, and maintaining clean codebases. While this approach often leads to robust and efficient software, it can sometimes create a gap between the product and its users. As Robert C. 
Martin pointed out in his book "<a href="https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship/dp/0132350882">Clean Code: A Handbook of Agile Software Craftsmanship" (2008)</a>,</p><blockquote><p>"The only valid measurement of code quality is WTFs/minute"</p></blockquote><p>This quote is a reminder that quality is judged by the people on the receiving end; by extension, even technically perfect code is of little value if it doesn't effectively meet user needs.</p><p>I have discovered that adopting a customer-oriented mindset shifts the focus to user satisfaction and business value. Engineers with this approach actively seek user feedback and are willing to make technical compromises for a more user-friendly product. This aligns closely with Tim Brown's Design Thinking principles. In his Harvard Business Review article <a href="https://readings.design/PDF/Tim%20Brown,%20Design%20Thinking.pdf">"Design Thinking" (2008, p. 86)</a>, Brown states,</p><blockquote><p>"Innovation is powered by a thorough understanding, through direct observation, of what people want and need in their lives and what they like or dislike about the way particular products are made, packaged, marketed, sold, and supported".</p></blockquote><p>The focus on deeply understanding user needs aligns perfectly with a customer-oriented approach, which should be the preferred mindset. In today's competitive software market, user experience and customer satisfaction are crucial differentiators. 
As Peter Drucker famously said in his book&nbsp;<a href="https://www.amazon.com/Practice-Management-Peter-F-Drucker/dp/B000MTCYLM">The Practice of Management (1954)</a>,</p><blockquote><p>"The purpose of a business is to create and keep a customer"</p></blockquote><p>I believe this principle applies equally to software development: a technically perfect product that fails to meet user needs is ultimately a failure. Throughout our careers, we have all experienced using external software for payroll or travel booking. We may have considered the user interface terrible and hoped for startups to disrupt the market. Unfortunately, disruption rarely comes, because an individual user's needs often differ from the needs of the product's actual customer - in this case, your company. Those external software vendors are focused on their customers - your employer - not you.</p><p>I see these mindsets as complementary rather than mutually exclusive. In my view, the optimal approach is to strike a balance, with a preference for the customer-oriented end. This balanced approach is effectively demonstrated in the Agile methodology, which places equal emphasis on technical excellence and customer collaboration. 
The first principle of the <a href="https://agilemanifesto.org/">Agile Manifesto (2001)</a> states,</p><blockquote><p>"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software."</p></blockquote><p>By wholeheartedly embracing a customer-centric mindset and upholding high engineering standards, software teams can confidently create products that are both technically sound and highly valued by users.</p>]]></content:encoded></item><item><title><![CDATA[Software Engineers should be good writers]]></title><description><![CDATA[Expressing your work is crucial in being effective]]></description><link>https://blog.manishsinha.me/p/software-engineers-should-be-good</link><guid isPermaLink="false">https://blog.manishsinha.me/p/software-engineers-should-be-good</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Sun, 11 Aug 2024 04:09:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2d2522c2-d39b-4f1b-966e-e62e8770316c_893x1360.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At the start of my software engineering career, I focused more on writing good code, following best practices, learning new programming languages, and engaging in stereotypical new graduate activities.</p><p>Over the years, I have become much more seasoned in the industry. This includes changing my mindset and being more flexible in my thought process. I have entertained ideas that would be considered off-putting right out of college. Software Engineers should focus on writing code. Right?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.manishsinha.me/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Manish Sinha! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Software Engineers should be good writers. They should be great writers. If a tree falls in the forest and no one hears it fall, did it even fall? If a software engineer delivered a project and no one knew about it, did they even deliver it? The comparison is intentionally outrageous to draw attention to the point. You need to express yourself and talk about your achievements.</p><div><hr></div><p>None of us should expect others to &#8220;just know&#8221; about our achievements. Do you know everyone&#8217;s achievements? Even if they somehow know about your achievements, do they have a proper picture, or is it a misrepresentation? When you expect others to know about your work, you are handing over the power of narrative. Seize it! You should control the narrative of your accomplishments.</p><p>If you know how to write well, you will be much more effective in communicating your ideas. Writing is more than just a bunch of words strung together. It&#8217;s a story. It should be compelling and hook the reader. It should be precise and express what you mean.</p><div><hr></div><p>I strongly suggest reading&nbsp;<a href="https://www.amazon.com/gp/product/0060891548/">William Zinsser&#8217;s book &#8220;On Writing Well&#8221;</a>&nbsp;about the importance of writing well. It is a game-changer. When I compare my writing style today to two years ago, I can see a stark difference. I get to the point quickly. I strive to start with the punchline. 
I aim to embrace the letter and spirit of the book going forward.</p>]]></content:encoded></item><item><title><![CDATA[Tech Debt isn't necessarily bad]]></title><description><![CDATA[We should use some nuance]]></description><link>https://blog.manishsinha.me/p/tech-debt-isnt-necessarily-bad</link><guid isPermaLink="false">https://blog.manishsinha.me/p/tech-debt-isnt-necessarily-bad</guid><dc:creator><![CDATA[Manish Sinha]]></dc:creator><pubDate>Sat, 10 Dec 2022 19:04:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!83np!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It's the end of 2022, and I am confident that enough has been written about tech debt. It&#8217;s a dead horse that has been beaten a thousand times. One would assume that the matter is settled and we would be moving on to talk about more pressing matters, which are: I am not sure, to be honest. 
We will get back to it in a while (or maybe not, as pressing matters aren&#8217;t what this post is about).</p><p>When asked the question &#8220;How can we reduce tech debt?&#8221; the conversation more often than not turns into a discussion of priorities. Sometimes that discussion is valid, but a significant chunk of the time we are just bikeshedding. Instead, focus on</p><ol><li><p>What is your definition of tech debt? Do people in this room (or on the Zoom call) mostly agree with your definition?</p></li><li><p>Why do you want to reduce tech debt?</p></li><li><p>Is this thing actually tech debt?</p></li></ol><p>Is debt bad? Can tech debt be compared to traditional debt or mortgages? Do you necessarily have to pay it back? From whom have you borrowed the debt?</p><p>If you are a software engineer of intermediate experience who has approached management or leadership to present your &#8220;One True Plan to slash Tech Debt by Half in three financial quarters&#8482;&#8221; it&#8217;s very likely you walked back with a sour taste in your mouth, a taste rivaling spoilt milk. <s>Reading </s>Listening between the lines, you can hear them screaming &#8220;Ummm, no, I am trying to tell you NO without sounding rude&#8221;. Management is hostile either because they don&#8217;t understand the proposal or, much more likely, because they have heard it for the millionth time. My money is on the latter.</p><p>Just ask Netscape when they tried rewriting... you heard that right. 
<a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/">Rewriting Netscape from scratch</a> while Microsoft ate its lunch with Internet Explorer (which resulted in another tragedy for the internet). Big-gigantic-huge changes without any short-term gains are extremely risky. We consistently make similar mistakes and need people to review our proposals and whip us back into shape. I personally prefer presenting proposals of this nature to people who have no prior knowledge of the product, so that they can ask fundamental questions I might have ignored; it&#8217;s quite easy to put your blinders on.</p><h2>Why do you want to reduce technical debt?</h2><p>Bear with me, cowboy! I know you want to bring home the sheep and celebrate with your buddies. Tell me: why do you want to reduce technical debt? Is it because it offends your sensibilities that something is tied together by duct tape? Maybe the tape is indeed strong enough to hold that thing together till the sun devours our planet. Maybe it&#8217;s <a href="https://www.avweb.com/aviation-news/speed-tape-patches-prompt-viral-post/">aviation-grade speed tape</a> that can hold jet engines together.</p><p>At this moment, I am not raining on your parade. <s>We, humans want a reason for doing something?</s> We humans ask &#8216;What&#8217;s in it for me?&#8217;. If you are pitching your tech-debt-reduction bill to your manager or a shark tank, you need to explain the &#8216;why&#8217;. You are a salesperson and the management is your client. Tell them the value proposition. </p><div class="pullquote"><p>Unless you are selling shitcoins, then ignore everything I just told you. At this point even I don&#8217;t know anymore. 
Crypto has become a wildcard.</p></div><h2>Does the customer care? How will it benefit the customer?</h2><p>You can go and ask the customer - do you want a button that does this? They will likely understand what that button does. The customer might give an enthusiastic &#8216;yes&#8217; or, at worst, make you run around, because no one wants to spend time understanding whether this feature will be useful to them.</p><p>Now go and ask the customer if they want to reduce technical debt. The people in the conference room will be looking at each other trying to understand what it means. The finance people might wonder if this is a new kind of bond to raise funds.</p><p>At this point, it&#8217;s up to you to understand if this even benefits the customer. Try to keep asking &#8216;why&#8217; till you reach the actual question. 
Think of this conversation:</p><blockquote><p>Question: Why do we need to reduce tech debt?</p><p>Answer: Our CorpseKeeper module has tight coupling with GothamBeautifier, which makes it difficult to maintain DogeCrypt for multiple versions, and deployment takes over 12 hours.</p><p>Question: How does the tight coupling make it difficult to maintain multiple versions?</p><p>Answer: It makes developers&#8217; lives difficult, as we have to test every version extensively before moving ahead to release.</p><p>Question: How will quicker deployment time help us and the customers?</p><p>Answer: If we have to deliver a quick fix, then 12 hours is too long and the customer is kept waiting.</p><p>Question: How long should the deployment take? How often are these quick fixes? How much time would it take to bring down the deployment time? What can break?</p><p>Answer: &#8230;</p></blockquote><p>The conversation can keep going until the benefits to the customer are determined. </p><h2>Will it ever bite you in the future?</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!83np!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!83np!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 424w, https://substackcdn.com/image/fetch/$s_!83np!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 848w, 
https://substackcdn.com/image/fetch/$s_!83np!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 1272w, https://substackcdn.com/image/fetch/$s_!83np!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!83np!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png" width="640" height="356" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/e2aaef37-e30f-4585-b290-3263a6a54293_640x356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:356,&quot;width&quot;:640,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:18789,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!83np!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 424w, https://substackcdn.com/image/fetch/$s_!83np!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 848w, 
https://substackcdn.com/image/fetch/$s_!83np!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 1272w, https://substackcdn.com/image/fetch/$s_!83np!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fe2aaef37-e30f-4585-b290-3263a6a54293_640x356.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>It&#8217;s entirely possible that the hack using bubblegum and duct tape is not really 
solving any major problem or providing any valuable service. How many customers use that code path? Even the most successful companies make awful products that fail spectacularly. Maybe the customer uptake will never be high enough to make any difference.</p><p>Even worse, the software or the component can suffer a slow agonizing death. At that time, is reducing tech debt a priority?</p><h2>Time is limited</h2><p>Take off your software engineer hat and put on your account manager hat. Would it be better to use the time to add another half-broken feature? Maybe! If it increases product sales, then why not? There is no shortage of adequately working &#8216;Temporary Hack, Fix Later&#8217; code blocks. </p><p>10,000 lines of revenue-generating rotting code is better than 10,000,000 lines of elegantly written code that passes all linting requirements but is not really seeing any action. </p><p>Maybe good code doesn&#8217;t exist at all. Maybe it does, and we should focus first on practical requirements like a minimum viable product, requirements clarification, timely releases, and adequate bug fixes.</p><p>This is not a slander of &#8216;good code&#8217;, but a pushback against &#8216;good code&#8217; getting more attention than it deserves at the cost of multiple other equally important factors.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xtTE!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!xtTE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 424w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 848w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 1272w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xtTE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png" width="455" height="695" data-attrs="{&quot;src&quot;:&quot;https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/b677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:695,&quot;width&quot;:455,&quot;resizeWidth&quot;:455,&quot;bytes&quot;:39097,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!xtTE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 424w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 848w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 1272w, https://substackcdn.com/image/fetch/$s_!xtTE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Fb677311e-6ba9-4b7c-b62f-7f47ddd4ed15_455x695.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">XKCD: Good Code https://xkcd.com/844/</figcaption></figure></div>]]></content:encoded></item></channel></rss>