<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>N9O — Nuno Coração</title><description>Product thinking, tech experiments, and open source — by Nuno Coração</description><link>https://n9o.xyz/</link><language>en-us</language><item><title>MCP Servers: The USB-C Moment for AI Agents</title><link>https://n9o.xyz/posts/202504-mcp/</link><guid isPermaLink="true">https://n9o.xyz/posts/202504-mcp/</guid><description>Model Context Protocol (MCP) is fast becoming the universal connector for AI agents, enabling a modular, secure, and rapidly growing ecosystem of tools. Here&apos;s why it matters—and what it unlocks.</description><pubDate>Mon, 14 Apr 2025 00:00:00 GMT</pubDate><content:encoded>Model Context Protocol (MCP) is what happens when AI gets a universal connector — think USB-C, but for intelligent systems. It defines a simple client-server protocol that lets AI models tap into tools, data sources, and even complex workflows through lightweight, discoverable, and standardized interfaces.

This piece offers an overview of what MCP is, how it works, why it matters for AI development, and the current state of its adoption—equipping you with both conceptual understanding and practical context.

At its core, MCP (Model Context Protocol) defines a consistent way for AI systems to talk to external tools and data sources using a standardized protocol. Think of it as an interface spec that decouples AI agents from the systems they interact with. Instead of hardcoding each integration, developers define a server that exposes functionality in a known format, and AI clients (like Claude, ChatGPT, or a custom assistant) connect via a local or remote stream using JSON-RPC.

The protocol revolves around a client-server model:

- The **MCP Client** lives inside the AI application. It handles connections, capability discovery, and request routing.
- The **MCP Server** is a standalone program (often a microservice or container) that exposes specific functions (&quot;tools&quot;), data sources (&quot;resources&quot;), and instruction templates (&quot;prompts&quot;) in a format the client can understand.

When the AI agent needs to do something—say, look up a file, query a database, or invoke an external service—it uses the client to send a structured request to the appropriate server. That server executes the logic (like querying an API or scraping a document), and sends the result back to the client, which injects it into the AI&apos;s context.

This separation has powerful implications. First, it abstracts away the complexity of external systems from the AI model. Second, it introduces a reusable, discoverable layer between AI logic and business logic. And third, it enables safety features like controlled access, authentication, and sandboxing—critical when models are allowed to act on external systems.

MCP servers turn isolated AI models into connected, capable systems. By exposing structured context (via resources), actionable capabilities (via tools), and strategic guidance (via prompts), they give AI models the grounding and affordances needed to actually deliver value in real-world applications.

### Why It Matters

Most AI agents today suffer from the same fatal flaw: they don&apos;t *do* much. Sure, they can answer questions or write copy—but when it comes to taking action (querying a database, sending an email, booking a meeting), they need help.

MCP changes this. It equips AI with an interface layer to external systems, allowing agents to reason over live data and take meaningful actions. That turns them from passive advisors into active participants in workflows.

### Anatomy of an MCP Server

Each server exposes three core things:

- **Tools** — Functions the model can invoke (like `send_email`, `run_query`)
- **Resources** — Read-only data the model can load into context (files, records)
- **Prompts** — Templates or examples that help the model use the tool effectively

This structure gives the AI a highly modular, inspectable environment. Tools can be scoped and versioned. Resources can be updated in real time. Prompts can carry domain-specific instructions that standardize behavior across models.

![MCP Architecture Diagram](mcparch.webp)

### Plug-and-Play Interoperability

MCP is open and model-agnostic. That means:

- One GitHub MCP server can work with Claude, ChatGPT, or any other agent.
- One developer can build a connector once, and every AI model can use it.
- Teams can swap out or chain tools without hard dependencies.

This design encourages a &quot;write once, serve many&quot; approach.

### What&apos;s Already Happening

Since its open-source release by Anthropic in late 2024, MCP has rapidly gained traction across the AI industry:

- **OpenAI**: In March 2025, OpenAI announced support for MCP across its products, including the ChatGPT desktop app and Agents SDK.
- **Microsoft**: Collaborating with Anthropic, Microsoft introduced a C# SDK for MCP, facilitating integration with .NET applications.
- **Google Cloud**: At Google Cloud Next 2025, Google unveiled &quot;Agentspace&quot; and the &quot;Agent2Agent&quot; (A2A) protocol.
- **Azure AI**: Microsoft&apos;s Azure AI Agent Service now supports MCP.
- **Enterprise Adoption**: Companies like Block, Apollo, and Sourcegraph have integrated MCP into their systems.
- **Open-Source Ecosystem**: The MCP community has developed over 300 open-source MCP servers.

### Developer Power Move

As a builder, you can now:
- Add new skills to your agent by running a Docker container.
- Write your own MCP server in Python, JS, or C#—SDKs exist for all major stacks.
- Host connectors remotely or locally, on Docker, Kubernetes, or even Cloudflare Workers.

MCP isn&apos;t another dev tool—it&apos;s a **design pattern** for composable AI.

### Strategic Implications

- **Standardization → Ecosystem**: Just like HTTP created the web, MCP is creating a shared AI interface layer.
- **Composable Agents**: One agent&apos;s output becomes another agent&apos;s context, via MCP resources.
- **New Categories**: Entire products are emerging as &quot;agent hubs&quot; or &quot;MCP marketplaces.&quot;

### What will you build?

If you&apos;re building AI tools in 2025, don&apos;t hardcode — build an MCP server. MCP gives your agent the ability to act, scale, and plug into a broader ecosystem.

Check out these starting points:
- [MCP SDKs and Spec](https://modelcontextprotocol.io)
- [Docker MCP Server community repo](https://github.com/docker/mcp-servers)</content:encoded><category>AI Agents</category><category>Developer Tools</category><category>Protocols</category></item><item><title>Execution is King</title><link>https://n9o.xyz/posts/202403-execution-is-king/</link><guid isPermaLink="true">https://n9o.xyz/posts/202403-execution-is-king/</guid><description>As a Product Manager, more often than not, I notice people mixing up ideas and execution in discussions. Both these concepts have entirely unique levels of fidelity to what the finished product will be.</description><pubDate>Tue, 19 Mar 2024 00:00:00 GMT</pubDate><content:encoded>&gt; &quot;Having a plan, even a bad plan, is better than no plan at all.&quot;

As a Product Manager, more often than not, I notice people mixing up _ideas_ and _execution_ in discussions. Each of these concepts sits at a very different level of fidelity to what the finished product will be. It&apos;s important for Product Managers to know the difference between these two concepts, how to manage them, and what importance they should have at different stages of the product development cycle.

## Definitions

### What is an Idea?
An idea is a concept or a vision. It&apos;s the initial spark of creativity that suggests a new way of doing something, solving a user pain point, or addressing a need. Ideas are abundant and can range from the mundane to the revolutionary. However, ideas by themselves are intangible and hold potential rather than value.

### What is Execution?
Execution is the process of taking an idea and turning it into reality. It involves planning, development, and implementation. Execution is where strategy, skill, and effort come into play to transform a concept into a product, service, or result. Unlike ideas, execution is tangible, measurable, and ultimately, what delivers value.

## Problems

Usually, people tend to split the above in a very simplistic way: ideas are about **what** and **why**, while execution is about **how** and **when**. This view is part of the issue. Both concepts are ultimately about the same **something**, described at different levels of fidelity across time.

Secondly, people make the mistake of thinking that one idea has one execution, and that the right execution typically lives in their head. In fact, one idea can have many different executions (several of them interesting), and the same execution can be reached from multiple different ideas.

Finally, people often confuse excitement and enthusiasm for an idea with the practicalities and challenges of executing it. This confusion can lead to unrealistic expectations, misalignment of goals, and disappointment towards the end of a project.

## Why does it matter?

While ideas are the seed, execution is the sunlight, water, and soil that allow the seed to grow. An average idea with excellent execution can outperform a brilliant idea with poor execution. Recognizing this balances the focus on the idea and how it will be brought to life, rather than just the idea itself.

## Conclusion

As we navigate the complex landscape of product development and innovation, distinguishing between ideas and their execution becomes not just beneficial, but essential. It informs our strategies, aligns our teams, and ultimately, determines our success in the market. By recognizing the value of execution and dedicating the necessary resources and effort to it, we can transform even the simplest ideas into remarkable realities.</content:encoded><category>Innovation</category><category>Entrepreneurship</category></item><item><title>Evolution of AI and Amara&apos;s Law</title><link>https://n9o.xyz/posts/202401-evolution-ai/</link><guid isPermaLink="true">https://n9o.xyz/posts/202401-evolution-ai/</guid><description>We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.</description><pubDate>Mon, 15 Jan 2024 00:00:00 GMT</pubDate><content:encoded>&gt; &quot;We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.&quot;
&gt; — Roy Amara

The impact AI has had on the world in the last year is unquestionable. Back in October 2022, I wrote about the fast-paced evolution of AI and how everything that was possible at the time felt like magic. Given everything that happened since then, I think it deserves a follow-up.

Last time I focused on the technology itself, what advancements were key to enabling GPTs, and made some predictions about the future. Some spot on, some maybe not. One year ago, the main topic was the sudden rise of AI applications since the creation of [transformers](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)). Since then, the speed of innovation hasn&apos;t decreased one bit, quite the opposite.

## Where we are

The AI landscape has dramatically evolved over the last year, marked by significant investments, technological advancements, and a surge in AI applications across various sectors.

### OpenAI and Microsoft
OpenAI&apos;s collaboration with Microsoft, marked by substantial investments, has led to groundbreaking developments like [GPT-4](https://openai.com/research/gpt-4), the [OpenAI API](https://openai.com/blog/openai-api), and the [GPT Store](https://openai.com/blog/introducing-the-gpt-store).

### Nvidia
Nvidia&apos;s role as the leading hardware provider for AI models is pivotal. The surge in their stock price reflects the critical demand for their GPUs, necessary for training and running AI models.

![Nvidia stock price](img/nvidiastock.webp)

### Google
Google&apos;s launch of its [AI models](https://blog.google/technology/ai/google-gemini-ai/) signifies its determination to remain at the forefront of technological innovation.

### Amazon
Amazon has made significant strides in AI, marked by its investments in [Anthropic](https://www.aboutamazon.com/news/company-news/amazon-aws-anthropic-ai), the launch of [Bedrock](https://aws.amazon.com/bedrock/), and the development of [Titan models](https://aws.amazon.com/bedrock/titan/).

### Meta
Meta&apos;s [contribution to open-source AI models](https://ai.meta.com/resources/models-and-libraries/), coupled with technologies like [Ollama](https://ollama.ai), is a game-changer. By enabling the local operation of powerful AI models, these initiatives democratize AI.

### RAG Applications
The increasing use of [Retrieval-Augmented Generation (RAG)](https://research.ibm.com/blog/retrieval-augmented-generation-RAG) techniques marks a significant evolution in AI applications. The most used tools in this space are [llamaindex](https://www.llamaindex.ai) and [langchain](https://www.langchain.com).

## Concerns

### Lack of Knowledge, AGI, and Alignment
The understanding of how neural networks operate is still limited. There&apos;s a concern that AGI could lead to unforeseen and potentially catastrophic outcomes.

### Copyright Issues
As AI models are often trained on publicly available data, copyright concerns, particularly regarding artistic work, have emerged.

### Business Model and Sustainability
Despite the substantial revenues generated by companies like OpenAI, the path to profitability remains unclear.

## What now?

Amara&apos;s law, coined by Roy Amara, a respected researcher and futurist, states:

&gt; &quot;We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.&quot;
&gt; — Roy Amara

![Amara&apos;s Law](img/amara.webp)

### Short-Term Perspectives
In the short term, the excitement surrounding AI&apos;s capabilities typically leads to inflated expectations. The immediate future of AI is more about incremental improvements and finding effective ways to integrate these technologies into existing systems responsibly and ethically.

### Long-Term Projections
Looking at the long-term impact of AI, we might be underestimating its potential transformative effects. Over time, AI could reshape entire industries, revolutionize scientific research, and alter the fabric of social interactions.

### Conclusion
Amara&apos;s law aptly captures the dichotomy in our perception of technological advancements like AI. The journey of AI is a marathon, not a sprint. It requires careful consideration, ethical stewardship, and a commitment to ongoing research and development.</content:encoded><category>AI</category><category>future</category><category>technology</category></item><item><title>Build your homepage using Blowfish and Hugo</title><link>https://n9o.xyz/posts/202310-blowfish-tutorial/</link><guid isPermaLink="true">https://n9o.xyz/posts/202310-blowfish-tutorial/</guid><description>Just one year ago, I created Blowfish, a Hugo theme crafted to build my unique vision for my personal homepage. I also decided to make it an open-source project. Fast-forward to today, and Blowfish has transformed into a thriving open-source project with over 600 stars on GitHub and a user base of hundreds. In this tutorial, I&apos;ll show you how to get started and have your website running in a couple of minutes.</description><pubDate>Wed, 04 Oct 2023 00:00:00 GMT</pubDate><content:encoded>Just one year ago, I created [Blowfish](https://blowfish.page/), a [Hugo](https://gohugo.io/) theme crafted to build my unique vision for my personal homepage. I also decided to make it an open-source project. Fast-forward to today, and Blowfish has transformed into a thriving open-source project with over 600 stars on GitHub and a user base of hundreds. In this tutorial, I&apos;ll show you how to get started and have your website running in a couple of minutes.

[Blowfish on GitHub](https://github.com/nunocoracao/blowfish)

## TL;DR

The goal of this guide is to walk a newcomer to Hugo through installing, managing, and publishing their own website. The final version of the code is available in this [repo](https://github.com/nunocoracao/blowfish-tutorial/tree/main) - for those who would like to jump straight to the end.

![Tutorial example](img/01.webp)

The visual style is just one of the many possibilities available in Blowfish. Users are encouraged to check the [documentation page](https://blowfish.page/) and learn how to customize the theme to their needs. Additionally, there are already [great examples](https://blowfish.page/users/) of the theme from other users available for inspiration. Blowfish also offers several extra features in the form of `shortcodes` available out of the box in the theme - check them out [here](https://blowfish.page/docs/shortcodes/) and get inspired.

## Setup your environment

Let&apos;s begin by installing all the tools you need. This guide covers the steps for macOS, so these instructions might not apply to your hardware and OS. If you are on Windows or Linux, please consult the guides on how to install [Hugo](https://gohugo.io/installation/) and [GitHub&apos;s CLI](https://cli.github.com/) for your OS.

If you are using macOS, let&apos;s start by installing `brew` - a package manager for the Mac - as it will help with installing and managing the other tools.

```bash
/bin/bash -c &quot;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)&quot;
```

Once `brew` is installed let&apos;s install Git, Hugo and GitHub&apos;s CLI.
```bash
brew install git
brew install hugo
brew install gh
```

Create a folder for your code and open a terminal session into it (I chose _blowfish-tutorial_ in the commands below, feel free to call the folder whatever you want).
```bash
mkdir blowfish-tutorial
cd blowfish-tutorial
```

Once inside the folder, the next step is to initialize your local `git` repo.
```bash
git init -b main
```

Now, let&apos;s create and sync the local repo to a GitHub repo so that your code is stored remotely.
```bash
gh auth login
gh repo create
git push --set-upstream origin main
```

Check the image below for the options I selected for this guide; again, feel free to change the name and description to fit your use case.

![gh repo create example](img/ghcreate.webp)

Finally, create a **.gitignore** file which allows you to exclude certain files from your repo automatically. I would start with something like the example below.

```bash
#others
node_modules
.hugo_build.lock

# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes

# Hugo
public
```

The last step is to save all the changes to the repo.
```bash
git add .
git commit -m &quot;initial commit&quot;
git push
```

## Create site and configure it

With all the tools ready, creating and configuring your site will be easy. Still within the folder you created in the last section, let&apos;s create an empty Hugo website (_with no theme_).

```bash
hugo new site --force .
```

Once the scaffolding finishes, try the command below to run your page. Open a browser at _[http://localhost:1313](http://localhost:1313)_ to see your site…

```bash
hugo server
```

Oops… Page not found – right?
This was expected: even though you created a website, Hugo doesn&apos;t provide any default experience – that is, your site doesn&apos;t have any pages to show yet.

Next step, let&apos;s install Blowfish using `git submodules` which will make it easier to manage and upgrade to new versions in the future.

```bash
git submodule add -b main https://github.com/nunocoracao/blowfish.git themes/blowfish
```

Next, create the following folder structure at the root of your code directory - `config/_default/`. Now you will need to download [these files](https://minhaskamal.github.io/DownGit/#/home?url=https:%2F%2Fgithub.com%2Fnunocoracao%2Fblowfish%2Ftree%2Fmain%2Fconfig%2F%5C_default) and place them in the _\_default_ folder you just created. The final structure should look something like this.

```
config/_default/
├─ config.toml
├─ languages.en.toml
├─ markup.toml
├─ menus.en.toml
└─ params.toml
```

Open **config.toml**, uncomment the line **theme = &quot;blowfish&quot;**, and you are ready to go. Try running the site again and check the result at _[http://localhost:1313](http://localhost:1313)_

```bash
hugo server
```

You should see something like the image below. Not much yet as we didn&apos;t add any content, but the main skeleton for Blowfish is already in place - just requires configuration.

![blowfish empty site](img/blowfishempty.webp)

Now let&apos;s configure the theme.

&gt; **FYI** This guide will not cover in detail what each parameter available in Blowfish does – for everything available and how to use it, check [Blowfish documentation](https://blowfish.page/docs/configuration/#theme-parameters) for every option in every file.

### menus.en.toml
This file defines your menu structure, for the top banner and the footer. For this guide, let&apos;s create two menu sections: one for _Posts_ and one for _Tags_.
- **Posts** - will display the full list of entries
- **Tags** - automatically generated based on tags placed on each article

To achieve this, make sure the following entries exist in the **menus.en.toml** file. Once the changes are done, you should see the menus appearing by re-running **hugo server**.

```toml
[[main]]
  name = &quot;Posts&quot;
  pageRef = &quot;posts&quot;
  weight = 10

[[main]]
  name = &quot;Tags&quot;
  pageRef = &quot;tags&quot;
  weight = 30
```

### languages.en.toml

This file will configure your main details as the author of the website. Change the section below to reflect your details.

```toml
[author]
   name = &quot;Your name here&quot;
   image = &quot;profile.jpg&quot;
   headline = &quot;I&apos;m only human&quot;
   bio = &quot;A little bit about you&quot;
```

The images for the website should be placed in the _assets_ folder. For this step, please add a profile picture to that folder named _profile.jpg_ or change the configuration above to the filename you chose.

### params.toml

This is the main configuration file for Blowfish. Most of the visual options or customization available can be configured using it, and it covers several areas of the theme. For this tutorial, I decided to use a **background** layout with the **Neon** color scheme.

Add an **image.jpg** to the assets folder which will be the background for the site.

Now let&apos;s jump into the _params.toml_ and start configuring the file. I will focus only on the values that need to be changed, don&apos;t delete the rest of the file without reading the docs. Let&apos;s begin by making sure that we have the right color scheme, that image optimization is on, and configure the default background image.

```toml
colorScheme = &quot;neon&quot;
disableImageOptimization = false
defaultBackgroundImage = &quot;image.jpg&quot;
```

Next, let&apos;s configure our homepage. We&apos;re going with the _background_ layout, configuring the homepage image and recent items. Furthermore, we are using the **card view** for items in the recent category. Finally, let&apos;s configure the header to be fixed.

```toml
[homepage]
  layout = &quot;background&quot;
  homepageImage = &quot;image.jpg&quot;
  showRecent = true
  showRecentItems = 6
  showMoreLink = true
  showMoreLinkDest = &quot;/posts&quot;
  cardView = true
  cardViewScreenWidth = false
  layoutBackgroundBlur = true

[header]
  layout = &quot;fixed&quot;
```

Now configure how the article and list pages will look. Here are the configurations for those.

```toml
[article]
  showHero = true
  heroStyle = &quot;background&quot;
  showSummary = true
  showTableOfContents = true
  showRelatedContent = true
  relatedContentLimit = 3

[list]
  showCards = true
  groupByYear = false
  cardView = true
```

If you run **hugo server** again, you should see something like the image below.

![blowfish no articles](img/blowfishnoarticles.webp)

## Adding content to your site

Create a folder to place your posts in: `/content/posts`. This is also the directory your menu was configured to list articles from. Within that folder, let&apos;s create a new directory named **myfirstpost**. Inside it, create an **index.md** file – your article – and place a _featured.jpg_ or _.webp_ in the same directory to serve as the thumbnail for the article. Use the example below to get started.

```markdown
---
title: &quot;My first post&quot;
date: 2023-08-14
draft: false
summary: &quot;This is my first post on my site&quot;
tags: [&quot;space&quot;]
---

## A sub-title

Lorem ipsum dolor sit amet, consectetur adipiscing elit.
```

You can create additional articles to see what your site will look like once there is content in it. Your site should look like the images below. The main page shows the most recent articles, each article is automatically linked to others via the related content section, and you get tag aggregation and full-text search.

![blowfish recent](img/blowfishrecent.webp)
![article view](img/article.webp)
![search](img/search.webp)
![tag view](img/tag.webp)

## Ship it

The only thing missing is shipping your site. I will be using [Firebase](https://firebase.google.com/) for hosting - it&apos;s a free alternative and provides more advanced features if you are creating something more complex. Go to Firebase and create a new project. Once that is done, let&apos;s switch to the CLI as it will make it easier to configure everything.

Let&apos;s install Firebase&apos;s CLI - if you are not on macOS, check the [install instructions on Firebase](https://firebase.google.com/docs/cli).
```bash
brew install firebase-cli
```

Now log in and init firebase hosting for the project.

```bash
firebase login
firebase init
```

Select hosting and proceed.

![firebase init](img/firebasecli.webp)

Follow the image below for the options I recommend. Make sure to set up the workflow files for GitHub Actions. These will guarantee that your code is deployed whenever there is a change to the repo.

![firebase options](img/firebaseoptions.webp)

However, those files will not work out of the box, as Hugo requires extra steps for the build to work. Please copy and paste the code blocks below into the respective files within the **.github** folder, but keep the original **projectId** in the files generated by Firebase.

### firebase-hosting-merge.yml
```yaml
name: Deploy to Firebase Hosting on merge
&apos;on&apos;:
  push:
    branches:
      - main
jobs:
  build_and_deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Hugo setup
        uses: peaceiris/actions-hugo@v2.6.0
        with:
          hugo-version: 0.115.4
          extended: true
      - name: Check out code
        uses: actions/checkout@v4
        with:
          submodules: true
          fetch-depth: 0
      - name: Build with Hugo
        run: hugo -E -F --minify -d public
      - name: Deploy Production
        uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: &apos;${{ secrets.GITHUB_TOKEN }}&apos;
          firebaseServiceAccount: &apos;${{ secrets.FIREBASE_SERVICE_ACCOUNT }}&apos;
          channelId: live
```
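`firebase init` also generates a second workflow that builds preview channels for pull requests; it needs the same Hugo build steps. Here is a sketch mirroring the merge workflow above - keep the **projectId** and any trigger conditions Firebase generated for you, as those details vary per project.

### firebase-hosting-pull-request.yml
```yaml
name: Deploy to Firebase Hosting on PR
&apos;on&apos;: pull_request
jobs:
  build_and_preview:
    runs-on: ubuntu-latest
    steps:
      - name: Hugo setup
        uses: peaceiris/actions-hugo@v2.6.0
        with:
          hugo-version: 0.115.4
          extended: true
      - name: Check out code
        uses: actions/checkout@v4
        with:
          submodules: true
          fetch-depth: 0
      - name: Build with Hugo
        run: hugo -E -F --minify -d public
      - name: Deploy Preview
        uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: &apos;${{ secrets.GITHUB_TOKEN }}&apos;
          firebaseServiceAccount: &apos;${{ secrets.FIREBASE_SERVICE_ACCOUNT }}&apos;
```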

The last step is committing your code to GitHub and letting the workflows you created take care of deploying your site.

```bash
git add .
git commit -m &quot;add github actions workflows&quot;
git push
```

In the Actions tab for your repo, you should see something like this.

![gh actions](img/githubactions.webp)

Once all the steps finish, your Firebase console should show something like the image below.

![firebase console](img/firebaseconsole.webp)

## Conclusion and Next Steps

You now have the first version of your homepage. You can make changes locally, and once you commit your code they will automatically be reflected online. What should you do next? I&apos;ll leave you with some useful links to get you inspired and learn more about Blowfish and Hugo.

- [Blowfish Docs](https://blowfish.page/docs/)
- [Blowfish Configuration](https://blowfish.page/docs/configuration/)
- [Blowfish Shortcodes](https://blowfish.page/docs/shortcodes/)
- [Blowfish Examples](https://blowfish.page/examples/)
- [Blowfish Users](https://blowfish.page/users/)
- [Hugo Documentation](https://gohugo.io/documentation/)

![blowfish final](img/01.webp)</content:encoded><category>tutorial</category><category>blowfish</category><category>hugo</category></item><item><title>Product-Market Fit: What it is and do you have it</title><link>https://n9o.xyz/posts/202307-pmf/</link><guid isPermaLink="true">https://n9o.xyz/posts/202307-pmf/</guid><description>In the fast-paced and competitive world of entrepreneurship, achieving product-market fit (PMF) is the holy grail. It is the moment when a product or service aligns perfectly with the needs and desires of the target market.</description><pubDate>Sat, 29 Jul 2023 00:00:00 GMT</pubDate><content:encoded>In the fast-paced and competitive world of entrepreneurship, achieving product-market fit (PMF) is the holy grail. It is the moment when a product or service aligns perfectly with the needs and desires of the target market, leading to enthusiastic customer adoption and sustainable growth. But how do entrepreneurs know if they have truly achieved this elusive state?

## What is Product-Market Fit?

Product-market fit refers to the ideal state where a product&apos;s value proposition aligns perfectly with the needs and demands of the target market. At this stage, customers are not only attracted to the product but also become enthusiastic users and advocates.

## Why is it Important?

Product-market fit holds immense importance for entrepreneurs and investors alike. It acts as a catalyst for accelerated growth, giving businesses a competitive edge in the market. Investors look for product-market fit as a validation of a startup&apos;s potential for success and scalability.

## Do You Have It?

Determining whether a company has Product-Market Fit is not a 100% exact science, but it is definitely far more scientific and measurable than some people would have you believe. An interesting framework to check if a venture has achieved PMF is the HUNCH framework:

- **H**air on fire value proposition
- **U**sage high
- **N**PS greater than 40
- **C**hurn low
- **H**igh LTV/CAC

### Hair on Fire Value Proposition

Is the value proposition a must-have need for the target customer, vastly superior to alternatives, and likely to generate high demand and customer enthusiasm?

### Usage High

Examine customer engagement and usage patterns to ensure the product is becoming an integral part of their routines or workflows, and its usage is growing over time.

### NPS Greater Than 40

Calculate the Net Promoter Score (NPS), which measures customer satisfaction and loyalty. A score greater than 40 indicates strong customer advocacy.

### Churn Low

Ideally, the churn rate should be less than 3% per month, demonstrating enduring value and customer satisfaction over time.

### High LTV/CAC Ratio

Aim for an LTV/CAC ratio greater than 3, indicating positive unit economics and the ability to acquire customers profitably at scale.

## How Can You Use It?

### …as a Startup Founder

Founders should use the HUNCH framework to continuously assess their product&apos;s fit with the market. Regularly gather customer feedback and analyze usage data to identify areas for improvement.

### …as an Investor

Investors can use the HUNCH framework as a due diligence tool to evaluate potential investments. Startups that demonstrate strong alignment with the HUNCH criteria are more likely to have achieved product-market fit.

### …as Product Managers

Product managers play a crucial role in optimizing product-market fit. They can use the HUNCH framework to identify specific areas for improvement and prioritize feature development that aligns with customer needs.

## Conclusion

Product-market fit is a pivotal milestone that sets the stage for a successful business. While it&apos;s not a scientifically quantifiable concept, the HUNCH framework provides valuable data points and signals to identify potential product-market fit. Remember, product-market fit is not merely a hunch; it&apos;s a culmination of insights, metrics, and customer signals that reveal whether a business has found its rightful place in the market.</content:encoded><category>entrepreneurship</category></item><item><title>Docker Init</title><link>https://n9o.xyz/posts/202305-docker-init/</link><guid isPermaLink="true">https://n9o.xyz/posts/202305-docker-init/</guid><description>Initialize Dockerfiles and Compose files with a single CLI command</description><pubDate>Thu, 11 May 2023 00:00:00 GMT</pubDate><content:encoded>Docker has revolutionized the way developers build, package, and deploy their applications. Docker containers provide a lightweight, portable, and consistent runtime environment that can run on any infrastructure. And now, the Docker team has developed `docker init`, a new command-line interface (CLI) command introduced as a beta feature that simplifies the process of adding Docker to a project.

&gt; **Note:** Docker Init should not be confused with the internally used docker-init executable, which is invoked by Docker when using the `--init` flag with the `docker run` command.

![Docker init screenshot 1](img/img1.webp)
![Docker init screenshot 2](img/img2.webp)

*With one command, all required Docker files are created and added to your project.*

## Create assets automatically
The new `docker init` command automates the creation of necessary Docker assets, such as Dockerfiles, Compose files, and `.dockerignore` files, based on the characteristics of the project. By executing `docker init`, developers can quickly containerize their projects.

To use `docker init`, developers need to upgrade to Docker Desktop version 4.19.0 or later and execute the command in the target project folder.

The current Beta release of `docker init` supports Go, Node, and Python, and our development team is actively working to extend support to additional languages and frameworks, including Java, Rust, and .NET.
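To give a sense of what gets generated, below is an illustrative sketch of the kind of Dockerfile the command might produce for a simple Python project. The actual output depends on your answers to the interactive prompts and on the `docker init` version, so treat every line here as an assumption rather than canonical output:

```dockerfile
# Illustrative only: the real generated file varies by project and version.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD python app.py
```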

Overall, `docker init` is a valuable tool for developers who want to simplify the process of adding Docker support to their projects.

## See Docker Init in action
To see docker init in action, check out the overview video by Francesco Ciulla on [YouTube](https://www.youtube.com/watch?v=f4cHtDRZv5U).

&gt; **Note:** this post was originally posted externally — go to [Docker&apos;s blog](https://www.docker.com/blog/docker-init-initialize-dockerfiles-and-compose-files-with-a-single-cli-command/) to read the full post.</content:encoded><category>docker</category><category>blog</category><category>release</category></item><item><title>Engineering Friendly Product Manager</title><link>https://n9o.xyz/posts/202211-engineering-friendly-pm/</link><guid isPermaLink="true">https://n9o.xyz/posts/202211-engineering-friendly-pm/</guid><description>The rules of engagement in solid partnership with engineering. The do&apos;s and don&apos;ts as seen from a software developer&apos;s perspective.</description><pubDate>Wed, 02 Nov 2022 00:00:00 GMT</pubDate><content:encoded>The product manager role has been gaining popularity in the tech industry over the recent years. As more companies add PMs to their organisation charts, there is still a lot of experimentation with team setups to find the best alignment possible between product and engineering.

Strategies to make it work are well covered on [Martin Fowler&apos;s website](https://martinfowler.com/articles/bottlenecks-of-scaleups/03-product-v-engineering.html); in this article, I&apos;ll focus more on the engineer&apos;s perspective and what our expectations are for a good product manager.

## What Not to Expect

As an engineer myself, I have observed friction coming from both sides, but also some productive partnerships. While one can argue that the team or the organisation influences the outcome, it mostly depends on how much each function is willing to collaborate with the other.

Let&apos;s do an exercise and think about these expectations in reverse. I believe the PM role is still in its early days, and because of that, it assumes different shapes, especially in less mature companies.

### Excel Manager

An ace with macros and a master at reporting progress in the weekly steering. The Excel manager cares very little about the product lifecycle and will spend all of her chips on getting the devs to commit to those deadlines.

![Excel Manager](img/dilbert-excel.gif)

### Featurist

Top specialist in 360 market research. She knows all about Steve Jobs and the story of the iPod, and cares about the product lifecycle, but can&apos;t afford to lose time building strategies.

![Featurist](img/dilbert-featurist.gif)

### Retired Programmer

Displeased with the idea of being a code monkey forever, she abandoned engineering in search of happiness and success, looking with regret at the life she left behind.

![Retired Programmer](img/retired-programmer.webp)

### King&apos;s Hand

Why share one&apos;s ideas when we&apos;re all here to serve a greater purpose? Like an all-pass filter, the king&apos;s hand takes no chances with fingers pointing in her direction.

![King&apos;s Hand](img/kings-hand.webp)

## The Single Idea of Product Management

What the above stereotypes have in common (intentionally) is that all of them delegate business calls to the leadership layer. More than design or implementation, the PM is accountable for the entire product lifecycle from idea through implementation to customer feedback and market performance.

That being said, how the partnership is implemented is a different story. The most successful teams I&apos;ve been part of are the ones where the PM is embedded in the team, sometimes even under the same leadership/reporting line.

## Onboard the Team Into the Business

One of the things that always bothered me is how little engineers know about the products they&apos;re building. One of the advantages of doing business at a lower level is that this barrier can be broken. Recurring discussions with the team about the product&apos;s performance are a powerful way of fostering innovation and keeping motivation levels high.

## One Roadmap to Rule Them All

Building a technical roadmap while working on a product team was one of the most counterproductive experiences I&apos;ve had. While it&apos;s important to keep track of tech debt that needs to be paid, if there&apos;s no buy-in from product, experience tells me that those tasks are never going to be implemented.

## Target Dates, Not Deadlines

If you want to stress out an engineer, ask them for an ETA or to commit to a deadline set by leadership. Building software under pressure only harms the business.

## Final Remarks

What the perfect PM should be like is still an open question, but it is clear that if both product and engineering work towards building an effective partnership, the results can be far more productive. From an engineer&apos;s point of view, the ideal PM is not a stakeholder but a peer, much like the CEO of a small startup inside the wider company.</content:encoded><category>Product</category><category>Engineering</category></item><item><title>How to Run Stable Diffusion On Your Laptop</title><link>https://n9o.xyz/posts/202210-stable-diffusion-tutorial/</link><guid isPermaLink="true">https://n9o.xyz/posts/202210-stable-diffusion-tutorial/</guid><description>In the last year, several machine learning models have become available to the public to generate images from textual descriptions. This has been an interesting development in the AI space. However, only recently did this technology become available for everyone to try.</description><pubDate>Thu, 06 Oct 2022 00:00:00 GMT</pubDate><content:encoded>In the last year, several machine learning models have become available to the public to generate images from textual descriptions. This has been an interesting development in the AI space. However, most of these models have remained closed source for valid ethical reasons. Until now…

The latest of these models is Stable Diffusion, which is an open machine learning model developed by [Stability AI](https://stability.ai/) to generate digital images from natural language descriptions.

## Initial Notes

A couple of notes before we get to it. I tried several guides online and was unable to get a smooth experience with any of them. The main goal of this guide is to provide working instructions for running Stable Diffusion on an Apple Silicon (M1) Mac.

&gt; *Note: I didn&apos;t try the above Mac guide, as when I found this repo, I had already figured out most of the workarounds needed to get the model to work.*

## Get the Code

Let&apos;s start with getting the code. I am using [InvokeAI&apos;s fork of Stable Diffusion](https://github.com/invoke-ai/InvokeAI).

```bash
git clone https://github.com/nunocoracao/InvokeAI
```

## Get the Model

Now, you need to get the actual model that contains the weights for the network. Go to [Hugging Face](https://huggingface.co/) and log in, or create an account if you don&apos;t have one. Accept the terms on the model card, and download the file called `sd-v1-4-full-ema.ckpt`.
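The repo looks for the checkpoint at a specific location. Based on the layout InvokeAI used at the time (the exact path is an assumption here, so double-check the repo&apos;s README against your checkout), the downloaded file can be linked into place from the repo root like this:

```bash
# Create the folder the code looks in (path assumed from the repo layout
# of the time) and link the downloaded checkpoint into it as model.ckpt.
mkdir -p models/ldm/stable-diffusion-v1
if [ -f ~/Downloads/sd-v1-4-full-ema.ckpt ]; then
  ln -sf ~/Downloads/sd-v1-4-full-ema.ckpt models/ldm/stable-diffusion-v1/model.ckpt
fi
```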

## Setup Environment

### Install the Xcode Command Line Tools

The first step is to install the Xcode command line tools:

```bash
xcode-select --install
```

### Install Conda

Most of the solutions I&apos;ve seen use [Conda](https://docs.conda.io/projects/conda/en/latest/#) to manage the required packages. I ended up using Anaconda; after installing it, verify that the `conda` command is available in your terminal:

```bash
conda --version
```

&gt; *Note: `conda` will require that both `python` and `pip` commands are available in the terminal.*

### Install Rust

When following some other guides, I kept running into problems in the next part of the process. After many tries, I figured out that I was missing the Rust compiler:

```bash
curl --proto &apos;=https&apos; --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

### Build and Activate the Environment

Now we will create the conda environment from `environment-mac.yml` and activate it:

```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
```

If you need to rebuild the environment:

```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env update -f environment-mac.yml
```

*If you are on an Intel Mac the command should be:*
```bash
PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
```

Now activate and preload:

```bash
conda activate invokeai
python scripts/preload_models.py
```

## Have Fun…

Now it&apos;s time to start to play around with Stable Diffusion. Run:

```bash
python scripts/invoke.py --full_precision --web
```

Then open your browser at `localhost:9090`. I&apos;ve been running mine with `512x512` images, around `100` steps for the final images, and CFG scale at `7.5`. As a sampler, I prefer the results using `DDIM`.

## Disclaimer &amp; Other Options

Even though I installed this on both a Mac and a Windows machine, the performance on the Windows machine with an Nvidia RTX 2070 was far better. There are a ton of options for running Stable Diffusion, some local, some in the cloud (e.g., Google Colab), so don&apos;t get frustrated if you want to try this out but don&apos;t have access to a machine that can run it.</content:encoded><category>AI</category><category>Stable Diffusion</category><category>Neural Network</category></item></channel></rss>