Build Your Own Serverless: Part 2
In the previous article, we walked through building a minimal viable serverless platform. In this post, we are going to enhance our platform, named cLess, with two improvements:
- Refining the Go code by leveraging Modularization and Loose Coupling principles.
- Developing an administrative service/API that enables registration of serverless containers.
So far, we have constructed a minimal reverse proxy that starts a backing container on demand and forwards requests to it.
Admin Service
Essentially, the administrative service will manage service definitions. Thus, whenever our server receives a request, it will be able to map the hostname to the corresponding service definition effectively.
To begin, let's outline our ServiceDefinition:
type ServiceDefinition struct {
    Name      string `json:"name"`
    ImageName string `json:"image_name"`
    ImageTag  string `json:"image_tag"`
    Port      int    `json:"port"`
    Host      string `json:"host"`
}
The above definition is straightforward and self-explanatory. Now, we need a mechanism to perform CRUD (Create, Read, Update, Delete) operations on this structure. For that purpose, let's define a ServiceDefinitionRepository interface and an in-memory implementation of it.
type ServiceDefinitionRepository interface {
    GetAll() ([]ServiceDefinition, error)
    GetByName(name string) (*ServiceDefinition, error)
    Create(service ServiceDefinition) error
    Update(service ServiceDefinition) error
}

type InMemoryServiceDefinitionRepository struct {
    services map[string]ServiceDefinition
    mutex    *sync.Mutex
}
func (r *InMemoryServiceDefinitionRepository) GetAll() ([]ServiceDefinition, error) {
    r.mutex.Lock()
    defer r.mutex.Unlock()
    services := make([]ServiceDefinition, 0)
    for _, service := range r.services {
        services = append(services, service)
    }
    return services, nil
}

func (r *InMemoryServiceDefinitionRepository) GetByName(name string) (*ServiceDefinition, error) {
    r.mutex.Lock()
    defer r.mutex.Unlock()
    service, ok := r.services[name]
    if !ok {
        return nil, ErrServiceNotFound
    }
    return &service, nil
}

func (r *InMemoryServiceDefinitionRepository) Create(service ServiceDefinition) error {
    r.mutex.Lock()
    defer r.mutex.Unlock()
    _, ok := r.services[service.Name]
    if ok {
        return ErrServiceAlreadyExists
    }
    r.services[service.Name] = service
    return nil
}

func (r *InMemoryServiceDefinitionRepository) Update(service ServiceDefinition) error {
    r.mutex.Lock()
    defer r.mutex.Unlock()
    _, ok := r.services[service.Name]
    if !ok {
        return ErrServiceNotFound
    }
    r.services[service.Name] = service
    return nil
}
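The repository returns two sentinel errors, ErrServiceNotFound and ErrServiceAlreadyExists, whose declarations are not shown in this post. A minimal sketch using the standard errors package (the exact messages are an assumption; check the repository for the real ones):

var (
    // Hypothetical declarations; the actual cLess code may word these differently.
    ErrServiceNotFound      = errors.New("service definition not found")
    ErrServiceAlreadyExists = errors.New("service definition already exists")
)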
Having established a method to execute CRUD operations on ServiceDefinition, let's proceed to construct a ServiceDefinitionManager. This will allow us to decouple the tasks of managing these definitions from the CRUD operations.
type ServiceDefinitionManager struct {
    repo ServiceDefinitionRepository
}

func NewServiceDefinitionManager(repo ServiceDefinitionRepository) *ServiceDefinitionManager {
    return &ServiceDefinitionManager{
        repo: repo,
    }
}

func (m *ServiceDefinitionManager) RegisterServiceDefinition(name string, imageName string, imageTag string, port int) error {
    service := ServiceDefinition{
        Name:      name,
        ImageName: imageName,
        ImageTag:  imageTag,
        Port:      port,
    }
    service.Host = fmt.Sprintf(HostNameTemplate, name)
    return m.repo.Create(service)
}

func (m *ServiceDefinitionManager) ListAllServiceDefinitions() ([]ServiceDefinition, error) {
    return m.repo.GetAll()
}

func (m *ServiceDefinitionManager) GetServiceDefinitionByName(name string) (*ServiceDefinition, error) {
    return m.repo.GetByName(name)
}
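RegisterServiceDefinition relies on a HostNameTemplate constant, and the admin server and proxy further down reference AdminHost and AdminPort. These constants are not shown in the post; a plausible sketch, where the host values follow the test run at the end of this article and the port is a placeholder:

const (
    // Assumed values; adjust them to match your own setup.
    HostNameTemplate = "%s.cless.cloud"    // e.g. "rust" -> "rust.cless.cloud"
    AdminHost        = "admin.cless.cloud" // host used to reach the admin API
    AdminPort        = 8081                // placeholder port for the admin server
)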
At first glance, the ServiceDefinitionManager might seem redundant given its similarities with the repository. However, as we extend these operations in subsequent sections of this series, its value will become increasingly apparent.
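For example, wiring up the in-memory repository and registering a definition programmatically (using the same values as the test run at the end of this post) would look roughly like this:

repo := &InMemoryServiceDefinitionRepository{
    services: make(map[string]ServiceDefinition),
    mutex:    &sync.Mutex{},
}
manager := NewServiceDefinitionManager(repo)
// Registers the "rust" service; its Host is derived via HostNameTemplate.
_ = manager.RegisterServiceDefinition("rust", "rust-docker", "latest", 8080)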
The next step involves creating an API to interact with our service definitions. We'll use the Echo web framework to expose an HTTP API:
func StartAdminServer(manager *ServiceDefinitionManager) {
    e := echo.New()
    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "Admin server is running")
    })
    e.GET("/serviceDefinitions", func(c echo.Context) error {
        services, err := manager.ListAllServiceDefinitions()
        if err != nil {
            return c.String(http.StatusInternalServerError, err.Error())
        }
        return c.JSON(http.StatusOK, services)
    })
    e.GET("/serviceDefinitions/:name", func(c echo.Context) error {
        name := c.Param("name")
        service, err := manager.GetServiceDefinitionByName(name)
        if err == ErrServiceNotFound {
            return c.String(http.StatusNotFound, err.Error())
        }
        return c.JSON(http.StatusOK, service)
    })
    e.POST("/serviceDefinitions", func(c echo.Context) error {
        service := new(ServiceDefinition)
        if err := c.Bind(service); err != nil {
            return c.String(http.StatusBadRequest, err.Error())
        }
        if !service.isValid() {
            return c.String(http.StatusBadRequest, "Invalid service definition")
        }
        if err := manager.RegisterServiceDefinition(service.Name, service.ImageName, service.ImageTag, service.Port); err != nil {
            return c.String(http.StatusBadRequest, err.Error())
        }
        return c.String(http.StatusCreated, "Service definition created")
    })
    e.Logger.Fatal(e.Start(fmt.Sprintf(":%d", AdminPort)))
}
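The POST handler calls an isValid method on ServiceDefinition that is not shown in this post. A minimal sketch of what such a check might look like, requiring only the fields the registration path actually uses:

// isValid is a hypothetical sketch; the real implementation may check more.
func (s *ServiceDefinition) isValid() bool {
    return s.Name != "" && s.ImageName != "" && s.ImageTag != "" && s.Port > 0
}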
Here is the full code for the admin module.
Container Manager:
The container manager module will oversee the operation and management of services, as well as supervise the containers backing them.
Given that the hostname is the sole medium through which the user accesses the service, we require a function capable of mapping a host to a running service.
Presented below is the code for both the `RunningService` struct and the `ContainerManager` interface:
type RunningService struct {
    ContainerID  string // docker container ID
    AssignedPort int    // port assigned to the container
    Ready        bool   // whether the container is ready to serve requests
}

func (rSvc *RunningService) GetHost() string {
    return fmt.Sprintf("localhost:%d", rSvc.AssignedPort)
}

type ContainerManager interface {
    GetRunningServiceForHost(host string) (*string, error)
}
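Because the proxy depends only on this interface, alternative implementations can be swapped in without touching the rest of the code, which is the loose coupling we set out to achieve. As a hypothetical illustration (not part of cLess), a stub for tests could look like this:

// staticContainerManager always resolves to a fixed local address.
type staticContainerManager struct {
    addr string
}

func (s *staticContainerManager) GetRunningServiceForHost(host string) (*string, error) {
    return &s.addr, nil
}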
Now that we have defined the ContainerManager interface, it is time to create an implementation for it. For this purpose, we will develop a DockerContainerManager:
type DockerContainerManager struct {
    mutex       *sync.Mutex
    containers  map[string]*RunningService
    usedPorts   map[int]bool
    sDefManager *ServiceDefinitionManager
}

func NewDockerContainerManager(manager *ServiceDefinitionManager) *DockerContainerManager {
    return &DockerContainerManager{
        mutex:       &sync.Mutex{},
        containers:  make(map[string]*RunningService),
        usedPorts:   make(map[int]bool),
        sDefManager: manager,
    }
}
func (cm *DockerContainerManager) GetRunningServiceForHost(host string) (*string, error) {
    name := strings.Split(host, ".")[0]
    fmt.Printf("getting container for %s \n", name)
    sDef, err := cm.sDefManager.GetServiceDefinitionByName(name)
    if err != nil {
        return nil, err
    }
    fmt.Println("got service definition", sDef)
    cm.mutex.Lock()
    defer cm.mutex.Unlock()
    rSvc, exists := cm.containers[sDef.Name]
    if !exists {
        rSvc, err = cm.startContainer(sDef)
        if err != nil {
            fmt.Printf("Failed to start container: %s\n", err)
            return nil, err
        }
    }
    if !cm.isContainerReady(rSvc) {
        return nil, fmt.Errorf("container %s not ready", sDef.Name)
    }
    svcLocalHost := rSvc.GetHost()
    return &svcLocalHost, nil
}
func (cm *DockerContainerManager) startContainer(sDef *ServiceDefinition) (*RunningService, error) {
    fmt.Println("Starting container......")
    port := cm.getUnusedPort()
    fmt.Println("got port......")
    rSvc, err := cm.createContainer(sDef, port)
    if err != nil {
        return nil, err
    }
    cm.containers[sDef.Name] = rSvc
    cm.usedPorts[port] = true
    return rSvc, err
}
// create container with docker run
func (cm *DockerContainerManager) createContainer(sDef *ServiceDefinition, assignedPort int) (*RunningService, error) {
    image := fmt.Sprintf("%s:%s", sDef.ImageName, sDef.ImageTag)
    portMapping := fmt.Sprintf("%d:%d", assignedPort, sDef.Port)
    args := []string{"run", "-d"}
    args = append(args, "-p", portMapping)
    args = append(args, image)
    fmt.Println("docker", args)
    cmd := exec.Command("docker", args...)
    containerID, err := cmd.Output()
    if err != nil {
        fmt.Printf("Failed to start container: %s\n", err)
        return nil, err
    }
    rSvc := RunningService{
        // docker prints the new container ID followed by a newline, so trim it
        ContainerID:  strings.TrimSpace(string(containerID)),
        AssignedPort: assignedPort,
        Ready:        false,
    }
    return &rSvc, nil
}
func (cm *DockerContainerManager) getUnusedPort() int {
    // get random port between 8000 and 9000
    // check if port is in use
    for {
        port := rand.Intn(1000) + 8000
        fmt.Println("checking port", port)
        _, exists := cm.usedPorts[port]
        if !exists {
            return port
        }
    }
}
func (cm *DockerContainerManager) isContainerReady(rSvc *RunningService) bool {
    if rSvc.Ready {
        return true
    }
    start := time.Now()
    for i := 0; i < 30; i++ {
        fmt.Println("Waiting for container to start...")
        resp, err := http.Get(fmt.Sprintf("http://localhost:%d", rSvc.AssignedPort))
        if err != nil {
            fmt.Println(err.Error())
        }
        if resp != nil {
            // close the response body to avoid leaking connections while polling
            resp.Body.Close()
            if resp.StatusCode == 200 {
                fmt.Println("Container ready!")
                fmt.Printf("Container started in %s\n", time.Since(start))
                rSvc.Ready = true
                return true
            }
        }
        fmt.Println("Container not ready yet...")
        time.Sleep(1 * time.Second)
    }
    return false
}
The methods within the DockerContainerManager have been migrated from the previous implementation and adapted to fit as methods on the struct. While all the methods hold importance, particular attention should be given to the GetRunningServiceForHost method.
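To make the flow concrete, here is a hypothetical usage sketch, assuming cm is a DockerContainerManager and a "rust" service definition has already been registered:

// Resolves the host to a local address, starting the container if necessary.
addr, err := cm.GetRunningServiceForHost("rust.cless.cloud")
if err != nil {
    fmt.Printf("failed to resolve service: %s\n", err)
} else {
    fmt.Println("proxy target:", *addr) // e.g. "localhost:8123"
}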
Here is the full code for the container manager module.
Main Function & Test Run:
Now that we have two modules, Admin and Container Manager, we just need to glue them together in our main function and set up the HTTP proxy functionality.
var containerManager ContainerManager

func main() {
    repo := &InMemoryServiceDefinitionRepository{
        services: make(map[string]ServiceDefinition),
        mutex:    &sync.Mutex{},
    }
    manager := NewServiceDefinitionManager(repo)
    go StartAdminServer(manager)
    containerManager = NewDockerContainerManager(manager)
    http.HandleFunc("/", handler)
    fmt.Println("Starting cless serverless reverse proxy server on port 80")
    http.ListenAndServe(":80", nil)
}
func handler(w http.ResponseWriter, r *http.Request) {
    // handle admin requests
    if r.Host == AdminHost {
        proxyToURL(w, r, fmt.Sprintf("%s:%d", "localhost", AdminPort))
        return
    }
    svcLocalHost, err := containerManager.GetRunningServiceForHost(r.Host)
    if err != nil {
        fmt.Printf("Failed to get running service: %s\n", err)
        // report the failure with a proper status code instead of an implicit 200
        http.Error(w, "Failed to get running service", http.StatusInternalServerError)
        return
    }
    fmt.Printf("Proxying to %s\n", *svcLocalHost)
    proxy := httputil.NewSingleHostReverseProxy(&url.URL{
        Scheme: "http",
        Host:   *svcLocalHost,
    })
    proxy.ServeHTTP(w, r)
}
func proxyToURL(w http.ResponseWriter, r *http.Request, pURL string) {
    proxy := httputil.NewSingleHostReverseProxy(&url.URL{
        Scheme: "http",
        Host:   pURL,
    })
    proxy.ServeHTTP(w, r)
}
Through effective modularization of our serverless platform, the main function can focus on initializing the modular components and coordinating communication between them, while the actual traffic proxying is delegated to the handler and the proxyToURL function. This separation of concerns improves the maintainability and extensibility of the system.
Test Run
Let's test the platform now:
Add these mappings to /etc/hosts if you haven't done so already:
127.0.0.1 golang.cless.cloud
127.0.0.1 python.cless.cloud
127.0.0.1 java.cless.cloud
127.0.0.1 nodejs.cless.cloud
127.0.0.1 rust.cless.cloud
127.0.0.1 admin.cless.cloud
You can build and run the server using:
go build && ./cless
Let's register, for example, our Rust app:
curl -X POST -H "Content-Type: application/json" \
-d '{"name":"rust", "image_name":"rust-docker", "image_tag": "latest", "port":8080}' \
http://admin.cless.cloud/serviceDefinitions
Let's see if the service was registered successfully and get the hostname:
curl http://admin.cless.cloud/serviceDefinitions/rust
{"name":"rust","image_name":"rust-docker","image_tag":"latest","port":8080,"host":"rust.cless.cloud"}
Now we can access our service:
curl http://rust.cless.cloud
Hello, Docker!%
Conclusion:
The serverless platform was modularized to promote separation of concerns and encapsulation. This enabled us not only to improve code readability and maintainability, but also to provide a service API for managing service definitions in a way that facilitates extensibility and addition of new capabilities. The modular architecture allows the system to evolve gracefully over time.
The complete source code for this project is available in the cLess repository on GitHub for further exploration.
In the next installment of this series, we will add features such as SQLite integration, the Docker Go client, a new hostname strategy, and graceful shutdown.