Unleash the Power of Node.js and Fastify: Build a REST API and Connect It to OpenAI Chat GPT
Unlock the potential of Node.js and Fastify as we delve into building a high-performance REST API. But that's not all – we're taking it a step further by seamlessly integrating OpenAI's Chat GPT.
In the ever-evolving landscape of web development, Node.js and Fastify stand as formidable pillars of efficiency and performance. Node.js, known for its speed and versatility, and Fastify, recognized for its lightning-fast web framework, together form a dynamic duo for building robust REST APIs. But what if we told you that there's a way to take your API to the next level, infusing it with the capabilities of OpenAI's Chat GPT?
In this comprehensive guide, we embark on a journey through the realms of Node.js, Fastify, and OpenAI's Chat GPT, unlocking their full potential to create an API experience like no other.
Node.js: Fueling the Backend Revolution
Node.js has been a game-changer in backend development, enabling developers to use JavaScript on the server-side. Its non-blocking, event-driven architecture makes it a natural choice for building highly scalable applications. We'll dive deep into how Node.js empowers our project with speed, efficiency, and an extensive library ecosystem.
Fastify: The Swift and Secure Web Framework
Fastify, a web framework for Node.js, is designed for blazing-fast performance. It excels in handling high loads while maintaining a minimalistic and developer-friendly API. We'll explore how Fastify's features and plugins streamline the creation of our RESTful API, ensuring it's both performant and secure.
OpenAI Chat GPT: The AI Language Model Revolution
OpenAI's Chat GPT represents a breakthrough in natural language understanding. With its ability to generate human-like text, it opens up exciting possibilities for conversational interfaces and content generation. We'll integrate Chat GPT seamlessly into our API, creating a dynamic and responsive user experience.
Creating the project
To start the project, we need to create a new Node.js project and install Fastify as a dependency. Create a new folder and run:
npm init -y
#or
yarn init -y
Now install the Fastify dependency:
npm install fastify
#or
yarn add fastify
Add dotenv too:
npm install dotenv
#or
yarn add dotenv
Open the project in your code editor:

It's time to create the folder structure and some files to get our project running. I like this structure:

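For reference, here is a layout that matches the file paths used in the rest of this article (the .env file is where dotenv will read its variables from):

server.js
.env
package.json
src/
  app.js
  routes.js
  controllers/
    GPTController.js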
In your package.json, create an entry for "type": "module"; this will allow the project to use import/export syntax. Also change the main entry to server.js.
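The relevant part of package.json would then look roughly like this (other fields omitted):

{
  "type": "module",
  "main": "server.js"
}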
Let's create our server module. Open app.js and write the code that imports the routes file and sets up Fastify:
import Fastify from 'fastify';
import Routes from './routes.js';
const fastify = Fastify({
  logger: true,
});
Routes({ fastify });
export default fastify;
In the Fastify options we will enable only the logger.
In routes.js, we can temporarily register a new route and point it to the controller:
import * as GPTController from './controllers/GPTController.js';
const gptRoutes = (fastify, _, done) => {
  fastify.get('/', GPTController.search);
  done();
}
export default async function Routes({ fastify }) {
  fastify.register(gptRoutes, { prefix: 'v1/gpt' });
}
This code imports the GPTController (we will create it next) and uses its search method in fastify.get. This line means that when the user hits .../gpt with the GET HTTP verb, we call the search function inside the controller.
We also have the Routes function, which registers the routes on Fastify and allows setting a prefix (and other options) for each route group. This is very useful for grouping resources by context or versioning each route group, as the sketch below shows.
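As a purely hypothetical sketch of that idea, routes.js could grow like this; userRoutes and gptRoutesV2 do not exist in this project and are only here to illustrate prefixes:

// Hypothetical sketch: extra route groups, each under its own prefix.
// userRoutes and gptRoutesV2 are made-up examples, not part of this project.
const userRoutes = (fastify, _, done) => {
  fastify.get('/', () => ({ users: [] }));
  done();
}

const gptRoutesV2 = (fastify, _, done) => {
  fastify.get('/', () => ({ version: 2 }));
  done();
}

export default async function Routes({ fastify }) {
  fastify.register(gptRoutes, { prefix: 'v1/gpt' });    // the existing group
  fastify.register(userRoutes, { prefix: 'v1/users' });  // another context
  fastify.register(gptRoutesV2, { prefix: 'v2/gpt' });   // a newer version
}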
Open the server.js file and let's see the code:
import 'dotenv/config'
import fastify from './src/app.js';
try {
  const port = parseInt(process.env.PORT, 10) || 7777;
  await fastify.listen({ port });
} catch (err) {
  throw err;
}
Here we just import dotenv and run its config setup, import the app.js file, and start the Fastify server with the listen function, passing the port as a parameter. We read the port number from the environment variables.
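For that to work, you can create a .env file in the project root; the port value is up to you, and here it simply matches the fallback used above:

PORT=7777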
Create the controller file named GPTController.js inside the controllers folder and put a small "Hello World" in it:
export const search = (request) => {
  return { message: "Hello World" }
}
Just a small function that returns an object with a message key whose value is "Hello World".
Fastify automatically serializes the controller's return value to JSON, and we can speed up this process (yes, serialization is slow) by using the response key in the schema option of each route; check the Fastify validation and serialization docs. It is not necessary, but it is good to know.
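A minimal sketch of what that could look like for the route above, assuming we keep the same handler:

// Optional: a response schema lets Fastify use its fast JSON serializer
// for the 200 response of this route.
fastify.get('/', {
  schema: {
    response: {
      200: {
        type: 'object',
        properties: {
          message: { type: 'string' },
        },
      },
    },
  },
}, GPTController.search);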
Time to run the server and see all the changes that we made:
node server.js

Some logs will be printed to the terminal because we enabled the logger, and every request will output some logs too.
Open the new route in an HTTP client (I use HTTPie) and we can see the "Hello World" message:

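With the 7777 port and the v1/gpt prefix from the code above, the request and response body look roughly like this:

http GET http://localhost:7777/v1/gpt

{
    "message": "Hello World"
}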
Okay, the main application is done! Now let's check out the OpenAI GPT part.