
Vibe Match: Building AI-Powered Vibe-Based Search


While many platforms let you filter profiles by structured attributes like hair style, hair color, or eye color, I noticed a big opportunity: what if we could also search by vibe? Things like “loves dancing”, “loves dogs”, or “bookworm who travels”.

Traditional filter systems aren’t built for this. So I built a concept demo called Vibe Match, designed to showcase how AI and vector search can make profile discovery more human and intuitive.

In this post, I’ll walk you through how it works.

The Great Wall of Filters

Most platforms rely on rigid filters. The more filters you add, the more complex the UI and query logic becomes. You quickly end up with a Great Wall of Filters, and even then, you can’t search for intangible, unstructured traits like personal interests or personality quirks.

That’s where vibe-based search comes in.

High-Level Solution

The idea is simple:

  • Keep structured filters for things like eye color or hair style.

  • Handle unstructured vibe preferences using vector search.

  • Use an AI model to intelligently parse the user’s query and split it into structured filters and vibe-based search terms.

How Vibe Match Works

1. Data Preparation

  • Donor profiles have both structured attributes (e.g. hair color) and unstructured bios/interests.

  • I preprocess the unstructured text (bios, hobbies, passions, goals, strengths) and convert them into vector embeddings.

  • These embeddings are stored in a vector database. For this demo, I used Datastax Astra, but this would work with Pinecone, Weaviate, or others.
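The preprocessing step above can be sketched as follows. The profile fields and function names here are illustrative stand-ins, not the project's actual code; in the real demo the concatenated text is sent to an embedding model and the resulting vectors are written to the vector database.

```typescript
// Sketch of the preprocessing step: flatten a donor profile's unstructured
// fields into one text blob ready for embedding. Field and function names
// are hypothetical.

interface DonorProfile {
  id: string;
  hairColor: string;   // structured attribute, kept for filtering
  bio: string;         // the unstructured fields below feed the embedding
  hobbies: string[];
  passions: string[];
  goals: string[];
  strengths: string[];
}

// Concatenate the free-text fields with light labels so the embedding
// model sees some context about what each fragment is.
function profileToEmbeddingText(p: DonorProfile): string {
  return [
    `Bio: ${p.bio}`,
    `Hobbies: ${p.hobbies.join(", ")}`,
    `Passions: ${p.passions.join(", ")}`,
    `Goals: ${p.goals.join(", ")}`,
    `Strengths: ${p.strengths.join(", ")}`,
  ].join("\n");
}

const example: DonorProfile = {
  id: "donor-1",
  hairColor: "brown",
  bio: "Professional dancer who loves the outdoors.",
  hobbies: ["dancing", "hiking"],
  passions: ["ballet"],
  goals: ["open a dance studio"],
  strengths: ["discipline"],
};

const text = profileToEmbeddingText(example);
```

The resulting `text` is what gets embedded; the structured attributes stay alongside the vector as filterable metadata.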

2. User Query Handling

When a user enters a query like:

“donors who love dancing and have brown hair”

  • The query is sent to the backend server.

  • It’s processed by an LLM (via OpenAI’s API) using structured output parsing.

I defined a Zod schema to tell the model which structured fields to extract (eye color, hair type, and so on).
Currently I support the following filters; the list can easily be extended to cover more.

hair_type, hair_color, hair_texture, eye_color,
dimples, siblings, dominant_hand, freckles,
marital_status, complexion, education_level, jewish_ancestory,
logical_creative, serious_silly, introvert_extrovert, allergies,
dental_work, egg_retrieval, vision_quality, diet,
mathematical_ability, scientific_ability, singing_ability
  • The LLM returns structured JSON with:

    • The detected filters.

    • The remaining vibe query.

Example Response:

{
  "filters": {
    "hair_color": "brown"
  },
  "vibeQuery": "donors who love dancing"
}
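The parsing contract can be sketched without the real dependencies. The actual demo relies on a Zod schema plus OpenAI's structured-output parsing; this dependency-free type guard only illustrates the shape of the response and how unsupported filter keys could be dropped.

```typescript
// Dependency-free sketch of the shape the LLM is asked to return.
// The real demo validates this with a Zod schema instead.

interface ParsedQuery {
  filters: Record<string, string>; // e.g. { hair_color: "brown" }
  vibeQuery: string;               // free-text part, sent to vector search
}

// Subset of the supported filter keys listed above.
const SUPPORTED_FILTERS = new Set([
  "hair_type", "hair_color", "hair_texture", "eye_color",
]);

// Validate a raw LLM response and drop any filter keys we don't support.
function toParsedQuery(raw: unknown): ParsedQuery {
  const obj = raw as { filters?: Record<string, string>; vibeQuery?: string };
  const filters: Record<string, string> = {};
  for (const [key, value] of Object.entries(obj.filters ?? {})) {
    if (SUPPORTED_FILTERS.has(key)) filters[key] = value;
  }
  return { filters, vibeQuery: obj.vibeQuery ?? "" };
}

const parsed = toParsedQuery({
  filters: { hair_color: "brown" },
  vibeQuery: "donors who love dancing",
});
```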

3. Vector Search + Filtered Results

  • The vibeQuery is converted into a vector embedding.

  • I run a vector search:

    • Apply the structured filters to narrow down profiles.

    • Use the vibe query embedding to find the most similar donor profiles based on vibe.

This combination delivers results that are both relevant and intuitive, each returned with a similarity (confidence) score.
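In miniature, the combined search looks like this. The vectors are tiny hand-made stand-ins for real embeddings, and the in-memory filter-then-rank logic mimics what the vector database does server-side.

```typescript
// Toy version of the combined search: apply structured filters first, then
// rank the survivors by cosine similarity to the vibe-query embedding.

interface IndexedProfile {
  id: string;
  attrs: Record<string, string>;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function vibeSearch(
  profiles: IndexedProfile[],
  filters: Record<string, string>,
  queryEmbedding: number[],
): { id: string; score: number }[] {
  return profiles
    // structured filters narrow the candidate set
    .filter((p) => Object.entries(filters).every(([k, v]) => p.attrs[k] === v))
    // the vibe embedding ranks what remains
    .map((p) => ({ id: p.id, score: cosine(p.embedding, queryEmbedding) }))
    .sort((x, y) => y.score - x.score);
}

const profiles: IndexedProfile[] = [
  { id: "a", attrs: { hair_color: "brown" }, embedding: [0.9, 0.1] },
  { id: "b", attrs: { hair_color: "blonde" }, embedding: [0.9, 0.1] },
  { id: "c", attrs: { hair_color: "brown" }, embedding: [0.1, 0.9] },
];

// [1, 0] stands in for the embedding of "donors who love dancing"
const results = vibeSearch(profiles, { hair_color: "brown" }, [1, 0]);
```

Profile "b" is excluded by the hair-color filter even though its vibe matches; "a" outranks "c" because its embedding is closer to the query.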

In the demo, the returned profiles do match the vibe search: the top result is a dancer with brown hair.

Similar Donor Suggestions

Each donor profile already has its own vector embedding.
When a user views a donor profile:

  • I run a similarity search using that profile’s embedding.

  • The system suggests other donors with a similar vibe, not just those matching filters.

This enhances discoverability and creates a more organic, exploratory experience.
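A minimal sketch of that suggestion logic, again with toy in-memory vectors standing in for the vector database:

```typescript
// "Similar donors": given one profile's stored embedding, rank every other
// profile by cosine similarity and return the top k ids.

type Embedded = { id: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function similarDonors(all: Embedded[], current: Embedded, k: number): string[] {
  return all
    .filter((p) => p.id !== current.id) // never suggest the profile itself
    .sort(
      (x, y) =>
        cosine(y.embedding, current.embedding) -
        cosine(x.embedding, current.embedding),
    )
    .slice(0, k)
    .map((p) => p.id);
}

const donors: Embedded[] = [
  { id: "a", embedding: [1, 0] },
  { id: "b", embedding: [0.9, 0.1] },
  { id: "c", embedding: [0, 1] },
];

const suggestions = similarDonors(donors, donors[0], 2);
```

Because the query vector is a profile's own embedding rather than a query's, no LLM call is needed here; it is a pure nearest-neighbour lookup.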

Final Thoughts

This was a fun concept build to explore how AI and vector search can modernize matching experiences. While this demo was built independently for Cofertility as a prototype, I believe this kind of vibe-based discovery can elevate many industries — from fertility tech to dating apps to talent marketplaces.

👉 Source Code: https://github.com/IamDushu/Cofertility-AI

💬 I’d love to hear your experience trying it out!
If you have feedback, thoughts, or just want to say hi, feel free to reach out at hey.dushyanth@gmail.com.