
AI-Driven Frontend Development

We can only see a short distance ahead, but we can see plenty there that needs to be done. -
Alan Turing​

Unleash the power of AI​


OpenAI GPTs are powerful, but unleashing their power in our daily dev work takes some insight. As a frontend (full-stack) developer, I have summarized some best practices, tips, and patterns for AI-driven frontend development based on my daily work and research.

I hope sharing this experience helps! Here we go ~ 🚀

Github Copilot​
Use it without any hesitation. 😎​
🤖 GitHub Copilot has already shown us the power of AI in our daily work. It's great to work with as a pair in the following ways with its paid features:

• Direct code completion: as you type, it suggests completions based on your context. This is the most useful feature for me; it works so naturally that I can focus on the code logic instead of the syntax.

:::​

Direct code completion offers multiple suggestions to choose from, but the top one is not always the best, so review it carefully.

:::​
• Write tests: with the GitHub Copilot Chat feature, you can use the /test command to ask Copilot to write tests for you. It's a great way to get tests started, but the generated cases are not always the best, so review and modify them to make them better.
• Explain code: if you have doubts about a piece of code, you can use the /explain command to ask Copilot to explain it for you; it feels natural and useful.
• Fix bugs and errors: you can fix bugs and errors, especially TypeScript errors, by simply hovering over the error and clicking the Fix button.
:::​

Show the error hint and fix with Copilot:​

Fix the error with Copilot:​


This works out really well for me, especially for TypeScript errors. AI can find its way ~
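
For instance, here is a hypothetical snippet (not output from Copilot itself) showing the kind of strict-null error the Fix action typically resolves with a small guard:

```ts
// Hypothetical example of a TypeScript error the Fix action typically handles.
interface User {
  name?: string;
}

// Error: 'user.name' is possibly 'undefined'.
// const label = user.name.toUpperCase();

// Copilot-style fix: guard against the missing value before calling string methods.
function getLabel(user: User): string {
  return (user.name ?? 'anonymous').toUpperCase();
}
```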

:::​

• Chat with your code context: in your editor, you can chat with the code you selected, with your current file, and even with language server APIs; these may be picked up by Copilot and used as context to generate code for you.
• Enterprise-level features: these may include more capabilities to integrate with:
	◦ CLI enhancement: suggestions for commit messages and shell commands, much like Warp does.
	◦ Code and information security: addresses business-level code security concerns.
	◦ Embedding support: use your own code and documents as retrieved context for the tasks above, so you can chat with your documentation and your code base. This is quite useful when you want to gain more insight from your code base and documentation.
	◦ Fine-tuning of the AI model to fit your business needs.

Without the enterprise-level features, we can implement something similar via a VS Code extension that accesses the relevant code context and hands it to OpenAI or another API vendor to generate the content we need, at a lower cost.
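
As a rough illustration of that approach, here is a minimal sketch only: the command id, prompt wording, and model name are assumptions, and it presumes the `openai` npm package plus the VS Code extension API.

```ts
import * as vscode from 'vscode';
import OpenAI from 'openai';

export function activate(context: vscode.ExtensionContext) {
  const command = vscode.commands.registerCommand('aiDev.explainSelection', async () => {
    const editor = vscode.window.activeTextEditor;
    if (!editor) return;

    // Use the current selection as the code context handed to the model.
    const selectedCode = editor.document.getText(editor.selection);
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    const completion = await openai.chat.completions.create({
      model: 'gpt-4o', // any capable chat model works here
      messages: [
        { role: 'system', content: 'You are a senior frontend developer.' },
        { role: 'user', content: `Explain the following code:\n\n${selectedCode}` },
      ],
    });

    vscode.window.showInformationMessage(
      completion.choices[0]?.message?.content ?? 'No response',
    );
  });

  context.subscriptions.push(command);
}
```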
Code Generation Patterns​

Before we start, I have to say: do use the latest and strongest model, such as the GPT-4 series, for your dev problems and the code-generation patterns below. Compared with GPT-3.5, it is more reliable and more powerful!

Sometimes, when the code context becomes more complex, we need to describe the requirements in a more structured way to generate code. Here are some patterns:

• Maintain your own prompt garden on GitHub: you can edit your prompts with git on GitHub.
	◦ My prompt garden is here: 🌟 Arno's Prompts; maybe you can get inspired by it.

• Use POE-like bot/agent creators to solve your repeated or domain-specific problems. I have created lots of dev-related bots to help me solve my dev problems:
	◦ With a simple system prompt, we can create a bot like simple-agent to work on specific domain tasks.
	◦ With some extra files attached as Knowledge, to be retrieved as prompt context.
	◦ Place your prompts inside your project source code, e.g. as *.prompt.md files, to keep the context of code generation close at hand for future modification and reuse.

Let's take my Next.js + antd component generator bot as an example; its system prompt is quite simple:

````
1. use AntDesign Component Library to create a component
2. use TypeScript version 5
3. use React Hook Style
4. generate the text start with ```tsx and end with ```
5. do not use `export default` syntax, use `export const` instead
6. add `'use client';` on the first line of the code if the user uses a next.js project
7. use `Typography` for basic text display
8. use `tailwindcss` as class utility for custom style if necessary (better without it unless you need to implement style yourself)
9. try to use `<Skeleton />` for loading state in the component render
10. do output code without explanation

Here are optional rules:

* if you generate a table component, you should consider table column width, ellipsis text, and add some formatters if necessary
* if column width is not specified, you should use `auto` width and consider using the scroll feature of the table if it is too wide
* use the standard `useSWR` or `useMutation` from the swr lib or the `@tanstack/react-query` API if the user requires data fetching

Based on the context given below, work:
<TSXCode>
````

Yet it works perfectly to generate code for my project, for example:

```tsx
'use client';
import React from 'react';
import { Descriptions, Skeleton, Typography } from 'antd';
import useSWR from 'swr';
import { MaasCodeBenchmarkReport } from './your-api-definition-file';

interface PressureTestReportProps {
  benchmarkId: number;
}

interface BenchmarkMetrics {
  avgGpuUsage: string;
  avgGpuMemUsage: string;
  avgCpuUsage: string;
  avgMemUsage: string;
  avgQPS: string;
  avgErr: string;
  avgRT: string;
}

interface PressureTestReportData {
  benchmarkMetrics: BenchmarkMetrics;
  benchmarkLog: string;
  isGPUError: string;
  minConsumption: string;
}

export const PressureTestReport: React.FC<PressureTestReportProps> = ({ benchmarkId }) => {
  // Fetch the benchmark report; MaasCodeBenchmarkReport acts as the fetcher.
  const { data, error } = useSWR(`/admin/maas/benchmark/report/${benchmarkId}`, MaasCodeBenchmarkReport);
  if (error) return <Typography.Text type="danger">Failed to load data!</Typography.Text>;
  if (!data) return <Skeleton active />;
  const reportData: PressureTestReportData = data.data;
  return (
    <Descriptions layout="vertical" bordered>
      <Descriptions.Item label="Average GPU Usage">{reportData.benchmarkMetrics.avgGpuUsage}</Descriptions.Item>
      <Descriptions.Item label="Average GPU Memory Usage">{reportData.benchmarkMetrics.avgGpuMemUsage}</Descriptions.Item>
      <Descriptions.Item label="Average CPU Usage">{reportData.benchmarkMetrics.avgCpuUsage}</Descriptions.Item>
      <Descriptions.Item label="Average Memory Usage">{reportData.benchmarkMetrics.avgMemUsage}</Descriptions.Item>
      <Descriptions.Item label="Average QPS">{reportData.benchmarkMetrics.avgQPS}</Descriptions.Item>
      <Descriptions.Item label="Average Error Rate">{reportData.benchmarkMetrics.avgErr}</Descriptions.Item>
      <Descriptions.Item label="Average Response Time">{reportData.benchmarkMetrics.avgRT}</Descriptions.Item>
      <Descriptions.Item label="Benchmark Log">{reportData.benchmarkLog}</Descriptions.Item>
      <Descriptions.Item label="Is GPU Error">{reportData.isGPUError}</Descriptions.Item>
      <Descriptions.Item label="Minimum Consumption">{reportData.minConsumption}</Descriptions.Item>
    </Descriptions>
  );
};
```

• Use OpenAI's multi-modal APIs, or similar APIs like Google's Gemini Vision, to generate code from an image in a simple way.
Take a hand-drawn UI mockup, for example: we can have multi-modal AI generate code for us with the GPT-4 Vision API.
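
A minimal sketch of that flow looks like the following (assuming the `openai` npm package; the model name, prompt wording, and image path are placeholders):

```ts
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Send a hand-drawn UI mockup to a vision-capable model and ask for component code.
async function uiImageToCode(imagePath: string): Promise<string> {
  const base64 = fs.readFileSync(imagePath).toString('base64');

  const response = await openai.chat.completions.create({
    model: 'gpt-4o', // any vision-capable model
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Generate a React + antd component that matches this UI sketch.' },
          { type: 'image_url', image_url: { url: `data:image/png;base64,${base64}` } },
        ],
      },
    ],
  });

  return response.choices[0]?.message?.content ?? '';
}

uiImageToCode('./ui-sketch.png').then(console.log);
```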

Tools and Platforms​


• Component-based code generation platforms like v0.dev by Vercel, built around visual pattern components.
	◦ These components are suitable for RSC (React Server Component) scenarios and work perfectly under simple visual design conditions.
• OpenAI GPTs with more complex agent capabilities, such as tools / function calls / file-context embedding / ..., to make the dev experience even better for some specific complex tasks.
	◦ ByteDance's Coze is another nice alternative to OpenAI's GPTs.
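
As a rough sketch of what "tools / function calls" means in practice (the tool name and schema below are hypothetical, and the `openai` npm package is assumed):

```ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  // Let the model request a type definition from our code base before it generates code.
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Generate an antd table for the Order entity.' }],
    tools: [
      {
        type: 'function',
        function: {
          name: 'get_type_definition', // hypothetical tool exposed by our agent
          description: 'Return the TypeScript definition for a named type in the code base.',
          parameters: {
            type: 'object',
            properties: { typeName: { type: 'string' } },
            required: ['typeName'],
          },
        },
      },
    ],
  });

  // If the model decided to call the tool, its name and arguments arrive here.
  console.log(response.choices[0]?.message?.tool_calls);
}

main();
```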
Future Patterns​
• Full RAG and fine-tuning pattern, as an enhancement for complex tasks. Use RAG to combine external knowledge into more accurate context to feed the LLM, producing more accurate output for cases such as code snippets, PRDs, images, and other structured data. A minimal retrieval sketch follows the figure below.

Image from the paper Retrieval-Augmented Generation for Large Language Models: A Survey
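
Here is a minimal sketch of the retrieval step described above (illustrative only: the embedding model choice, the in-memory snippet store, and the similarity helper are assumptions, using the `openai` npm package):

```ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical knowledge base: code snippets / PRD fragments with pre-computed embeddings.
interface KnowledgeItem {
  text: string;
  embedding: number[];
}

const cosine = (a: number[], b: number[]) => {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
};

// Retrieve the top-k most relevant items and feed them to the LLM as extra context.
async function answerWithRag(question: string, knowledge: KnowledgeItem[]): Promise<string> {
  const { data } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: question,
  });
  const queryEmbedding = data[0].embedding;

  const context = knowledge
    .map((item) => ({ item, score: cosine(queryEmbedding, item.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, 3)
    .map(({ item }) => item.text)
    .join('\n---\n');

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'Answer using the provided context when relevant.' },
      { role: 'user', content: `Context:\n${context}\n\nQuestion: ${question}` },
    ],
  });

  return completion.choices[0]?.message?.content ?? '';
}
```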

• Semi-auto meta pattern: a human orchestrates the key steps of multiple agents to complete a complex task with structured, complex context, all inside one workspace (like Notion) that structures the context and provides it to the AI agents, keeping complex operations in one place. -> idea from Arno's elaboration studio.
• Full-auto meta pattern: multiple agents work together to complete a complex task with structured, complex context (a very mature AI pattern of the future).
