← Back to Blog

Introducing Skint Labs

AI customer service infrastructure for Shopify and an open-source firewall SDK for LLM applications.


Skint Labs builds AI infrastructure for e-commerce. This post provides a technical overview of both products and the architecture decisions behind them.

Jerry: AI Customer Service for Shopify

Jerry is a customer service platform that connects directly to a Shopify store's data layer. Rather than matching user queries against static FAQ documents, Jerry operates on live data. The product catalog, order records, return policies, and fulfilment status are all accessed through the Shopify Admin API.

The core of Jerry is a semantic search engine backed by vector embeddings. When a store installs Jerry, the entire product catalog is synced and embedded. Customer queries are matched by meaning rather than keywords, so a question like “something warm for hiking under $80” returns relevant, in-stock results.

Order tracking is handled in real time. Jerry retrieves fulfilment status, carrier tracking data, and delivery estimates directly from Shopify. Returns and refunds are processed against the store's configured policies, with eligibility checks handled before any action is taken.

The chat widget is built in React and TypeScript, isolated via shadow DOM so it doesn't interfere with the host storefront's styles or scripts. Voice output uses the OpenAI TTS API with support for 8 languages. Stores can configure their preferred language from the dashboard.

Revenue attribution is tracked through a 24-hour attribution window. When a customer interacts with Jerry and subsequently places an order, the conversation is linked to the sale via Shopify's order creation webhooks. This gives store owners a direct measurement of the platform's return on investment. Jerry is also trained to cross-sell and up-sell based on what is in the customer's cart and browsing context.

The backend runs Python and FastAPI, with Groq serving Llama 3 as the language model, Pinecone for vector storage, and Stripe handling subscription billing. Deployment is on Railway with continuous deployment from the main branch.

WonderwallAi: Open-Source AI Firewall

WonderwallAi is a Python SDK that provides input and output scanning for LLM applications. It was originally developed as Jerry's internal security layer after prompt injection and data exfiltration attempts were observed in production.

The architecture uses four distinct layers, each targeting a different threat category.

Semantic Router. Computes cosine similarity between the incoming message and a configurable set of allowed topics using the all-MiniLM-L6-v2 embedding model. Topics are defined in plain English. The computation is local, requires no API calls, and completes in under 2ms. Messages that fall below the similarity threshold are rejected before reaching the LLM.

Sentinel Scan. An LLM-based binary classifier that evaluates messages the semantic router passes through. This layer targets sophisticated injection attempts that are semantically close to allowed topics but contain hidden instructions. The classifier runs on Groq and is only invoked when the semantic router score falls within an ambiguous range, keeping the per-request cost low.

Egress Filter. Scans LLM output before it reaches the user. Detects leaked API keys across 10+ provider formats, flags personally identifiable information, and checks for canary token presence. The canary token mechanism works by injecting a deterministic, session-unique string into the system prompt. If the string appears in the model's response, it indicates successful prompt extraction. This method produces zero false positives.

File Sanitizer. Validates uploaded files by magic byte signatures rather than file extensions, and strips EXIF metadata from images to prevent GPS and device data leakage.

The SDK is published on PyPI under the MIT license with 59 passing tests. It is LLM-agnostic and framework-agnostic. It functions as middleware between the user input and the model call. A hosted REST API is also available for teams that prefer an HTTP integration over a Python dependency.

Pricing

Jerry operates on a flat monthly subscription plus a per-resolution fee. Base is $49/month with 500 conversations included, Growth is $149/month with unlimited conversations, and Elite is $499/month with dedicated support. All tiers charge $0.25 per AI resolution. There is no revenue share. The first 150 customers receive 50% off for life.

WonderwallAi's SDK is free and open source. The hosted API offers a free tier at 1,000 scans/month, with paid plans at $19/month (50K scans), $99/month (500K scans), and $249/month (2M scans). The first 300 paying customers receive 50% off for life.

Architecture Notes

Both products share the same near-black UI palette and deploy to Railway. Jerry's conversation engine uses a pipeline architecture where each stage (intent classification, entity extraction, data retrieval, response generation) operates independently and fails gracefully. If Pinecone is unreachable, Jerry falls back to keyword search. If the billing service is down, conversations continue and usage is reconciled later.

WonderwallAi is fail-open by default. If any protection layer encounters an error, the message passes through rather than blocking the user. This is a deliberate design choice. A security tool that degrades user experience during transient failures will be disabled by developers, which is worse than no security at all.

Links

Jerry The Customer Service Bot: jerry.skintlabs.ai  |  Live Demo

WonderwallAi: wonderwallai.skintlabs.ai  |  GitHub  |  PyPI

Skint Labs: skintlabs.ai