Ekehi Engineering Sprint 2 — Building the Ekehi Resource Discovery Engine
How I architected and shipped the features of a funding discovery platform for African women entrepreneurs, orchestrating a team of 7 while staying hands-on in the codebase.

As stated in last week's article, Ekehi is a resource discovery platform built for women-led businesses across Africa. It surfaces funding opportunities, training programmes, and credit products, aggregated, vetted, and filterable in one place. (Note: going through the repository while reading the article will provide more context.)

As the Engineering Lead, the team's job was to ship the three core features that would make Ekehi real:

- Feature 3.1 — Funding Opportunities: a searchable, filterable directory of active funding across VC, grants, accelerators, loans, and more
- Feature 3.3 — Training & Capacity Building: a curated listing of business programmes, bootcamps, and accelerators for women entrepreneurs
- Feature 3.4 — Sector Classification: a consistent taxonomy enabling precise filtering across all resource types

Seven frontend contributors. A backend to build from scratch. One week.

Before writing a line of feature code, I had to answer one question: where does the data live, and how does the frontend get it? (Strictly two questions grouped into one, but you get the gist.)

The stack constraint was already set — Netlify for the frontend, Supabase as the database. We had to introduce a Node.js/Express API layer on Render between them, rather than letting the frontend call Supabase directly.

Client (Netlify) → Node.js/Express API (Render) → Supabase (PostgreSQL + Auth)

This was deliberate. Calling Supabase directly from the frontend would have required exposing an API key in client-side JS — and even with RLS, that creates a surface area I didn't want. The Express server holds the service role key in an environment variable, never exposed to the client. The server became the single security boundary. The tradeoff is an extra network hop.
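Because the server owns the query layer, it, and not the client, decides which parameters ever reach the database. A minimal sketch of that whitelisting idea (the function and the exact parameter handling are my illustration, not the Ekehi codebase):

```javascript
// buildListFilters: copy only whitelisted client params into the
// filter set; server-enforced constraints are added unconditionally
// and can never be overridden from the client side.
function buildListFilters(clientParams = {}) {
  const filters = { approval_status: 'approved' }; // server-enforced
  const { search, sector, country } = clientParams; // whitelist
  if (search) filters.search = search;
  if (sector) filters.sector = sector;
  if (country) filters.country = country;
  return filters;
}
```

A client sending `approval_status=pending` is simply ignored: that key is never copied across the boundary.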
For this use case, mostly read operations on a discovery tool, it was the right call.

Supabase's Row Level Security is enabled on all tables, but the server bypasses it using the service role key. This might seem backwards — why enable RLS if you bypass it? The answer is defence in depth: RLS is a safety net in case something is misconfigured at the server layer. The real gate is the Node.js/Express server, which hardcodes approval_status = 'approved' into every list query. No matter what query params the frontend sends, unapproved records are never reachable.

The backend was structured as a strict four-layer system: Route → Controller → Service → Supabase SDK. A controller never touches the database. A service never touches req or res. This isn't just clean code preference — it makes each layer independently replaceable and testable. When a Supabase query needed changing, I touched only the service file. When a response format needed updating, only the controller.

Render's free tier spins down after 15 minutes of inactivity — which would mean a 30-second cold start for the first user every morning. I set up a cron job to ping the meta endpoint every 15 minutes, keeping the instance warm. Small operational detail, significant user experience impact.

Features 3.1 and 3.3 both required multi-dimensional filtering. Rather than building raw SQL strings or a complex query DSL, I applied each filter conditionally to a Supabase query object:

```javascript
let query = supabase
  .from('funding_opportunities')
  .select(FIELDS, { count: 'exact' })
  .eq('approval_status', 'approved'); // always applied — not a client param

if (search) query = query.or(`opportunity_title.ilike.%${search}%,...`);
if (sector) query = query.contains('sectors', [sector]);
if (country) query = query.eq('country', country);
```

{ count: 'exact' } returns the total row count alongside the data in a single query — no second round-trip needed for pagination metadata.
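Since the count arrives with the data, the pagination metadata reduces to pure arithmetic; a sketch of such a helper (the function name is my own):

```javascript
// buildMeta: derive pagination metadata from the requested page,
// the page size, and the total row count from { count: 'exact' }.
function buildMeta(page, limit, total) {
  const totalPages = Math.max(1, Math.ceil(total / limit));
  return {
    page,
    limit,
    total,
    totalPages,
    hasNextPage: page < totalPages,
    hasPrevPage: page > 1,
  };
}
```

For example, `buildMeta(2, 10, 45)` yields `totalPages: 5` with both `hasNextPage` and `hasPrevPage` true.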
Every list endpoint returns a consistent meta object: { page, limit, total, totalPages, hasNextPage, hasPrevPage }.

Sector classification isn't glamorous, but getting it wrong cascades into every filter in the system. I designed the taxonomy as enum slugs — agriculture_food, technology_digital, fashion_textiles — stored as arrays on each record. This meant one opportunity could span multiple sectors (a common real-world case), and filtering used Supabase's contains() operator against the array.

I also built a /meta endpoint that returns all enum values — opportunity types, sectors, stages, cost types, duration ranges — in a single call. Frontend components populate their dropdowns from this rather than having hardcoded option lists scattered across multiple files.

Every endpoint — success or error — returns the same envelope shape:

```json
{ "success": true, "message": "...", "data": [...], "meta": {} }
```

I documented every endpoint in endpoints.md with request/response examples. This wasn't just good practice — with 7 contributors building frontend integrations, a shared reference prevented mismatched field names and assumptions about response shapes from becoming bugs.

Before features could be built, the frontend needed an architecture that 7 contributors could work within without constant coordination. I approached this in three layers: a component library, a module system, and a database schema that wouldn't break filtering.

Rather than leaving each contributor to build UI primitives from scratch — and ending up with 7 different button styles — I built a shared component library under client/shared/components/, each component following the same static factory pattern:

```javascript
const btn = Button.create({ label: 'Apply now', variant: 'primary' });
container.appendChild(btn);
```

Every component has one public method, create(), that returns a DOM element. Internal rendering logic is hidden behind ES2022 private class fields (#buildClasses(), #buildHTML(), #attachEventListeners()).
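In miniature, the pattern looks like this. To keep the sketch runnable without a DOM it returns markup as a string, whereas the real components return DOM elements; the class internals here are my illustration, not the Ekehi code:

```javascript
// Static-factory component: one public method, create(); all
// rendering internals hidden behind ES2022 private class members.
class Button {
  #label; #variant; #size;

  constructor(label, variant, size) {
    this.#label = label;
    this.#variant = variant;
    this.#size = size;
  }

  // The only public surface contributors ever touch.
  static create({ label, variant = 'primary', size = 'md' }) {
    return new Button(label, variant, size).#buildHTML();
  }

  // Private: compose the class list. Unreachable from outside.
  #buildClasses() {
    return `btn btn--${this.#variant} btn--${this.#size}`;
  }

  // Private: produce the final markup.
  #buildHTML() {
    return `<button class="${this.#buildClasses()}">${this.#label}</button>`;
  }
}
```

Calling `Button.create({ label: 'Apply now', variant: 'primary' })` yields the rendered button; trying to reach `#buildHTML` from outside the class is a syntax error, which is exactly the point.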
Contributors couldn't accidentally break internals — the only surface they ever touched was the public API. The library covered:

- Button — 4 variants (primary, secondary, outline, ghost), 3 sizes, icon support, renderable as an anchor element for link CTAs
- Input — form input with validation states
- Dropdown — custom styled select with keyboard dismissal, click-outside-to-close, and onChange callback
- SearchBar — input + search button, fires onSearch on button click or Enter
- Nav — self-mounting; drop it into any page and import the script, and it renders itself. Handles mobile hamburger menu, active link detection, and authenticated vs unauthenticated CTA states
- Footer — same self-mounting pattern

Every component was documented in docs/components/ with a full API reference, usage examples, and instructions for extending it. The goal was that any contributor could pick up a component without asking me how it worked.

Last sprint, every HTML page was loading 4–6 separate scripts. This sprint the frontend moved to ES modules: a script with type="module" is automatically deferred — no load-order issues. ES modules are cached — auth.service.js, imported by both nav.js and login.js, evaluates only once. Contributors could add a component to their page with a single import line, without touching HTML at all.

A full migration plan was written in docs/setup/es-modules-migration.md before executing it — mapping every file that needed changes, every new import/export statement, and every HTML page that needed its script tags collapsed. The migration was executed as a single PR (#75) to avoid a partial state where some pages used modules and others didn't. This was the most consequential piece of work in the sprint, and the least visible.

When wiring the filter queries, I discovered the database schema would break filtering by design. Categorical fields like sector and stage_eligibility were stored as free-text varchar — values like "Technology & Digital Services, Financial Services & Fintech" comma-separated in a single column. A standard .eq('sector', 'technology_digital') would never match.
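The failure, and the normalisation the refactor implies, can be sketched in a few lines. The stored value is from the article; `toSlug`, `splitSectors`, and the longer slug spellings below are my illustration, and the production mapping also collapsed some names further (e.g. to technology_digital), which is part of why it needed manual work:

```javascript
// A free-text varchar crams several sectors into one cell:
const stored = 'Technology & Digital Services, Financial Services & Fintech';

// An equality filter against a slug can never match the whole string.
const eqMatches = stored === 'technology_digital'; // false

// toSlug: lowercase, collapse runs of non-alphanumerics to "_".
function toSlug(value) {
  return value
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '_')
    .replace(/^_+|_+$/g, ''); // trim stray leading/trailing underscores
}

// splitSectors: split a comma-separated cell into slugs, tolerating
// missing spaces after commas ("A,B" as well as "A, B").
function splitSectors(cell) {
  return cell.split(',').map(toSlug).filter(Boolean);
}

// Against an array of slugs, membership is a direct check, the JS
// analogue of PostgreSQL's @> on a text[] column.
const sectors = splitSectors(stored);
const containsMatches = sectors.includes('technology_digital_services'); // true
```

The comma-without-space edge case mentioned later falls out for free here, since toSlug strips the stray whitespace.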
The schema was refactored from scratch:

- PostgreSQL enums for single-value categoricals (opportunity_type, status, format) — validation enforced at the database layer, not application code
- text[] arrays for multi-value fields (sectors, stages) — a single opportunity can belong to multiple sectors, which is the real-world case
- GIN indexes on every array column — PostgreSQL's @> operator with a GIN index turns a multi-sector filter into a fast indexed lookup
- Lookup tables (sectors, stages) as the canonical source of display names, decoupled from the enum slugs used in queries

I chose text[] arrays over junction tables deliberately. Supabase's JS SDK maps .contains('sectors', ['technology_digital']) directly to PostgreSQL's @> — one line, no JOINs, no raw SQL. Junction tables would have required supabase.rpc() or nested filters that broke the existing service layer pattern.

The migration ran as 8 sequential scripts, each documented with rollback considerations. The data mapping exercise — converting "Grant-NGO" to grant_ngo, "Rolling Applications" to rolling_applications, fixing edge cases where values were stored without spaces after commas — took as long as writing the migration code itself.

The result: filtering just works. .contains('sectors', [sector]) against a GIN-indexed text[] column is both correct and fast.

With 7 contributors and little daily standup, documentation was how I kept the team unblocked.
By the end of the sprint, the docs/ directory contained:

- docs/components/ — full API reference for every shared component
- docs/api/endpoints.md — every endpoint with request/response examples, all query params, all error codes
- docs/setup/system-design-case-study.md — the full architectural rationale, for onboarding and for the team's own understanding of what they were building on
- docs/setup/es-modules-migration.md — the migration plan before execution
- docs/setup/db-refactor.md — the schema refactor with every migration script, data mapping, and verification query documented

A contributor building the training page filter section shouldn't need to ask me what the Dropdown API is, what query params the /trainings endpoint accepts, or what slug values are valid for programme_type. That information lived in the docs. The friction of building fell from "wait for the lead to answer" to "read the reference."

Midway through the sprint, I caught a subtle but critical bug: the opportunities page would load correctly for unauthenticated users, but return an empty array immediately after login.

The root cause was a Supabase singleton contamination bug. auth.service.js was calling signInWithPassword() on the shared service role client — the same singleton used for all database queries. Even with persistSession: false, the GoTrueClient stores the returned user JWT in memory as currentSession. Every subsequent database query then sent that user JWT as the Authorization: Bearer token instead of the service role key, making PostgREST apply RLS. Since there's no permissive RLS policy for the authenticated role, queries returned empty.

The fix was architectural: a separate Supabase client initialised with the anon key, used exclusively for user-facing auth operations. The service role singleton is never touched by auth flows.
```javascript
// auth.service.js — separate client, never shared
const authClient = createClient(supabaseUrl, supabaseAnonKey, {
  auth: { autoRefreshToken: false, persistSession: false },
});
```

This is the kind of bug that's invisible in testing and devastating in production, because it only manifests after a user successfully logs in.

I decomposed each feature into discrete GitHub issues with explicit acceptance criteria and assigned them across the team. The filter section for opportunities, the training page UI, the login wiring, the signup wiring, the navbar auth state — each became a separate issue with clear inputs and outputs. Some contributors didn't complete their assignments before the sprint deadline. Rather than letting work stall, I reassigned and in several cases picked up the work myself.

I reviewed every PR that touched the three core features. Two patterns emerged in reviews that I pushed back on consistently:

- PR #63 — Signup wiring: requested changes before approval. The initial implementation had issues with how the auth flow was handling the response from the server — it needed corrections before merge.
- PR #66 — Training & Resources filter section: requested changes before approval. The initial UI wiring wasn't aligned with the established component API.

On both, the aim was consistency with the patterns the rest of the codebase had already established. Inconsistency at the integration layer is what creates bugs that take hours to trace.

Once the backend was live, I oversaw the integration work. Two issues surfaced during review:

Shared utilities extracted to prevent duplication. Both pages needed the same date formatting and amount scaling logic. Rather than letting each page carry its own copy, I extracted formatAmount, formatDate, daysUntil, humanize, and buildQueryString into a shared opportunity.utils.js module — imported by both the listing and detail pages.

Intl.NumberFormat memoization.
The original amount formatter was constructing a new Intl.NumberFormat instance on every card render. On a listing page with 20 results, that's 40 expensive constructor calls per page load. I added a Map-based cache keyed by currency code, one construction per currency, reused on every subsequent call.

By end of sprint:

- A live Express API on Render serving Features 3.1 and 3.3, with full filter support, pagination, and a consistent response contract
- An opportunity detail page with full listing data, deadline countdown, sector/stage tags, and an apply CTA
- Filter and search wired end-to-end on both the opportunities and resources pages
- A shared sector taxonomy (Feature 3.4) implemented as enum slugs across the database, API, and frontend filter components
- A /meta endpoint returning all filter enum values for dynamic dropdown population
- Auth flow (signup, login, logout) wired across the frontend, with a critical singleton bug patched in the backend
- PRs reviewed, 2 with requested changes before merge
- Endpoint documentation and a system design case study written for the team

The filter state on both pages is duplicated — the same filters object shape, the same onFilterChange pattern, the same buildQueryString call. With more time I would extract a shared FilteredPage module that both pages compose from, rather than each carrying its own copy of the pattern. It works now. It will diverge later. The /meta endpoint also isn't being consumed by the frontend yet — filter options are still hardcoded in the JS files. The infrastructure is there; it just needs to be wired in.

The most important thing I did this sprint wasn't writing code: it was making decisions early enough that the team could move in parallel without stepping on each other. The layered backend architecture, the response envelope, the sector taxonomy, the component API — these were the guardrails that let 7 people build towards the same system without needing a daily sync to stay aligned.
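To close out one concrete detail from the integration pass: the memoised formatter can be sketched like this (the locale and formatting options are my assumptions; the article names formatAmount but not its exact signature):

```javascript
// Memoise Intl.NumberFormat by currency code: construction is the
// expensive part, so build one instance per currency and reuse it.
const formatters = new Map();

function formatAmount(value, currency = 'USD') {
  if (!formatters.has(currency)) {
    formatters.set(currency, new Intl.NumberFormat('en', {
      style: 'currency',
      currency,
      maximumFractionDigits: 0,
    }));
  }
  return formatters.get(currency).format(value);
}
```

Twenty cards rendering USD amounts now hit the constructor once instead of twenty times.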
Engineering leadership at this scale is mostly about removing ambiguity before it becomes a bug.