How to create Content type specific Publishing Workflow

We have been working on an Employee Experience Centre project that needed different publishing workflows for a content type used by various departments.

By default, two modules, Workflows and Content Moderation, are available in Drupal core, and they give you one workflow per content type.

In our case, the situation was different. Given the large size of the company, there are multiple departments, and the head of each department wanted the ability to change the content publishing workflow. Content might be sitting in a specific workflow state when a new directive arrived to create a new workflow, yet they wanted content already in process to keep following the earlier workflow.

For this scenario, we had no option but to create a custom module.

  • The default Workflows module provides moderation states. We created specific states and transitions based on the client's requirements.
  • The client agreed to define multiple workflows along with their states. These were mostly fixed and could be reused by different departments across the company, so we created each one using custom code.
  • Then we created a page to enable selection of a workflow per content type. On this page, we created a configuration form listing all content types and departments.
  • After selecting a department, we listed all content types with their existing workflows.
  • The workflow can be changed for specific content; this permission was given to a few specific roles only.

Routing for configuration form
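A routing entry for such a configuration form might look like the following; the module, path and class names here are illustrative, not taken from the project:

```yaml
# custom_workflow.routing.yml (hypothetical module name)
custom_workflow.settings:
  path: '/admin/config/workflow/department-workflow'
  defaults:
    _form: '\Drupal\custom_workflow\Form\WorkflowSettingsForm'
    _title: 'Department workflow settings'
  requirements:
    _permission: 'administer site configuration'
```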

Workflow configuration form

  • Then in hook_node_presave() or in hook_node_update(), we can first check the workflow type configured for the content type and department in the configuration form, and then set the value of the moderation state on the node.

// The queryCheckExistingWorkflowTypeName service fetches the workflow type for a given department_name and node type.
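A rough sketch of the presave hook described above; the module name, service id and field name are assumptions for illustration, and only the queryCheckExistingWorkflowTypeName method comes from our code:

```php
<?php
// custom_workflow.module (hypothetical module name)

use Drupal\node\NodeInterface;

/**
 * Implements hook_node_presave().
 */
function custom_workflow_node_presave(NodeInterface $node) {
  // The department field name is an assumption for illustration.
  $department = $node->get('field_department')->value;

  // Look up the workflow configured for this content type + department.
  $workflow_type = \Drupal::service('custom_workflow.query')
    ->queryCheckExistingWorkflowTypeName($department, $node->bundle());

  // Only set the initial state on content that is not yet in a workflow,
  // so content already in process keeps following its earlier workflow.
  if ($workflow_type && $node->isNew()) {
    $node->set('moderation_state', 'draft');
  }
}
```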

We managed to create this workaround because the project team agreed to use predefined workflows.

This story is not over yet. The actual requirement was to enable even admin users to create these workflows from a configuration page. We are out of ideas on this and would like to hear your suggestions.

How to Set Up React Native for Android?

React Native is an open-source JavaScript framework created by Facebook for writing real, natively rendering mobile applications for iOS and Android. It’s based on React and uses Facebook’s JavaScript library for building UI.

Web developers can write mobile applications that look and feel truly 'native', all from JavaScript. Code once, and React Native apps are available for both iOS and Android.

Why React Native is used
The developer writes one set of code, in JavaScript, and gets the performance of React Native mobile applications on both platforms, iOS and Android. We can reuse React Native components for building both Android and iOS apps. React Native is a great platform for developers who already have expertise in JavaScript, as there is no need to learn Android-specific Java or iOS-specific Swift. React Native is rapid and UI-focused, so apps load quickly and feel smooth.

Follow these steps to set up React Native on Ubuntu for the Android platform:

1. Install JDK 8 or a later version

sudo apt-get update
sudo apt-get install openjdk-8-jdk

2. Download and Install Android Studio into your system.

    Download the Android Studio package using the following link:
      https://developer.android.com/sdk/index.html?source=post_page

 To launch Android Studio, open a terminal, navigate to the android-studio/bin/ directory and execute studio.sh.

  2a. Check if you have a .bash_profile file in your home directory:

         Type ls -a in your terminal and check for the .bash_profile file.

     If you don't have a .bash_profile in your home directory:

        Type touch .bash_profile

        Open your .bash_profile:

        Type nano ~/.bash_profile

     And add your SDK path:


   Then ctrl + o to write out, hit Enter and then ctrl + x to close the editor.
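The lines to add are typically the SDK exports below; the path assumes Android Studio's default SDK location, so adjust it to wherever the SDK was installed on your machine:

```shell
# Hypothetical default SDK location; change it if you installed elsewhere.
export ANDROID_HOME="$HOME/Android/Sdk"
# Make the emulator and platform tools (adb etc.) available on the PATH.
export PATH="$PATH:$ANDROID_HOME/emulator:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools"
```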

3. Install Node.js

sudo apt-get install -y nodejs

4. Install React Native CLI:

npm i -g react-native-cli

5. Create a new React Native project called ‘MyFirstProject’

react-native init MyFirstProject
cd MyFirstProject

 6. Start the development server

react-native start

7. Run your project

react-native run-android


This is your first screen, and your first project in React Native is complete.

react project      

Creating Secure API using Node.js, Express Js and Passport-JWT

Node.js is a server-side platform built on Google Chrome's JavaScript engine (V8). It is an open-source, cross-platform runtime environment for executing JavaScript code outside of the browser. Node.js is used for developing server-side and networking applications.

  1. Steps for the Installation of Node.js  
    1. For Windows Users 
      1. Go to https://nodejs.org/en/download/
      2. Click on the Windows installer.
      3. Now click on “continue” for all the popup screens.
      4. Check the node version by running the command
        node -v
    2. For ubuntu/mac users
      1. The first step is to check the Node version using the command below

        $ node -v
        Output:
        v5.0.0
      2. The next step is to check the npm version using the command below

        $ npm -v
        Output:
        4.0.0
      3. If the installed versions are recent enough, go to the next step; if not, remove Node with
        $ sudo apt-get remove --purge nodejs
      4. Now, install curl (needed to download the setup script)
        $ sudo apt-get install curl
      5. Download node package

        $ curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -
        Note: You can use any version instead of 10.x, such as 8.x or 6.x
      6. Let's install the NodeJS package
        $ sudo apt-get install -y nodejs
        Check node and npm version using the above commands and make sure it is greater than or equal to the given value.
  2. Create your first simple Node.js project

    1. Create the folder “node-project”

    2. Create a file “app.js” inside it and add the code below.

    3. Run command “node app.js”

    4. It will print “Hello! World” in the command line.
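The app.js referred to in step 2 can be as small as this:

```javascript
// app.js - a minimal Node.js program
const message = 'Hello! World';
console.log(message);
```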

3. Creating Secure API using Node.js, Express Js and Passport-JWT

Express.js: Express is a web application framework for Node.js. It is a third-party library, used for routing.

Passport-JWT: This module lets you authenticate API endpoints using JSON Web Tokens. It is used to secure RESTful endpoints without sessions.

npm: The 'Node Package Manager' is a command-line tool, as well as a registry of third-party libraries, which we can add to our Node applications.

Steps For Creating Secure Node Api  

 1. Create a folder 'node-project' and inside the folder run the command

 npm init

It will create the package.json file. This file will contain details about the project such as the name, author, version, dependencies, and GitHub-related items.

 

2. Then, run the following command inside the root folder.

npm install --save express passport passport-local passport-jwt jsonwebtoken 

Then, check the package.json. It will contain all the above modules.

3. Create a file “app.js” and include the installed modules in the app.js file using the require keyword.

4. Create one more folder called “api” inside the root folder, then create a file called user.js and add the code as shown below:

Creation and Storage of JWT : 

const token = jwt.sign({ userName: response.userName, userId: response.userId }, 'secretkey');
  1. When a user logs in, we first check whether the user exists in our database.
  2. If the user exists, we create the token by signing the user object with a secret key. This is the JWT (JSON Web Token).
  3. The token is stored on the client side (typically in local storage).
  4. Whenever the user requests access to an API, we pass the token to our middleware function to verify whether it is valid. Only if it is valid do we allow access to our API endpoints.

 

5. Now add this route to our “app.js”

app.get('/login', user.login);

6. Now create one more folder called middleware, and inside the middleware folder create the file passport.js. In passport.js, add the following code.

Here, I am using the passport-jwt strategy. Once the token stored on the client side is sent while accessing our API, we call this function, verify the token using the “secret key”, and check again whether the user exists in our database. If it exists, the function returns the user object as a response and the request proceeds to our API endpoint. If the user does not exist, it shows the error “Unauthorised”.

7. Then include this file in our app.js, i.e.

require('./api/middleware/passport')(passport);

8. Create a function in user.js to fetch all user data. Add the code mentioned below.

9. Then add our middleware passport-jwt in app.js.

passport.authenticate('jwt', {session: false})

10. While accessing our API /userData, it will call the middleware. Only if the token is valid will it allow access to the getAllUsers function in user.js; otherwise, it will show the error “Unauthorised”.

Adding Custom field in search results for Decoupled Drupal Architecture

Nowadays, most of the sites we work on are built using a decoupled Drupal approach. A decoupled website opens up multiple opportunities, and along with new opportunities, we also get our fair share of challenges. One such challenge was when I was tasked with creating a module that can take input (a search string and file type) from the front-end framework and return the dataset along with metadata, e.g. image, date, content type and file type.

Before we move on to how we implemented this, we need to understand how the search function operates in a Drupal site. It has three main parts: source, index and results.

Source refers to any kind of content we have on the website. We parse the content and store the metadata in the index, and we display the results on the front end.

In our case, the front end was built using AngularJS.

First, we had to identify the schema in which all of our source data was stored.

The required search page had a basic set of features like title, description, taxonomy and link, plus some extra metadata like image, date, type and file type. Since we search across multiple sites, we also needed information about the source each item comes from.

I created a custom module exposing an API that can be used for content search, like a REST resource, using a _controller with the POST method.

Below is a basic module to explain how we can create Search API to be consumed by the external applications.

We would need to create files as per the below structure.

customapi_search/
├── customapi_search.info.yml
├── customapi_search.routing.yml
└── src
    └── Controller
        └── SearchAPIController.php


Step 1. Create the customapi_search.info.yml file to define the metadata of the module.

Step 2. Create the search routing file customapi_search.routing.yml, in which we define our path (endpoint), controller and methods.
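The routing file can be shaped like this; the path and class match the structure above, while the method name (search) is an assumption for illustration:

```yaml
# customapi_search.routing.yml
customapi_search.content_search:
  path: '/api/content-search'
  defaults:
    _controller: '\Drupal\customapi_search\Controller\SearchAPIController::search'
    _title: 'Content search'
  methods: [POST]
  requirements:
    _access: 'TRUE'
```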

Step 3. Create the SearchAPIController.php controller file, in which we define the custom _controller for the [POST] resource.

In our case, the controller method acts as a REST API using the POST method; it extends the ControllerBase class, and an entity query is used to fetch data according to the POST method's parameter values.

Endpoint of Custom search api: /api/content-search

JSON query parameters, for example:

{"q": "test", "firstResult": 0, "numberOfResults": 1000, "filters": {"type": ["page","pdf","docx"]}, "sortBy": "latest"}

The above module will provide this endpoint.

Endpoint response output:
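The exact response shape depends on your controller; a hypothetical response for the query above might look like:

```json
{
  "totalCount": 1,
  "results": [
    {
      "title": "Test page",
      "description": "A sample landing page matching the search term.",
      "link": "/node/123",
      "type": "page",
      "fileType": null,
      "image": "/sites/default/files/sample.jpg",
      "date": "2019-08-01",
      "source": "main-site"
    }
  ]
}
```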

Changing Cloned Reference Values while Cloning an Entity in Drupal 8

Recently, I came across a unique situation in an Employee Experience portal project that I was working on.

As in any enterprise, there were multiple departments in the company generating content, with varying levels of access as well as security.

To increase synergy as well as collaboration between departments, they had decided to allow authors to clone content from different departments. This was also to enable them to reutilize the design layout as well as content created by others.
 

We realized that this is not an option available within the Drupal ecosystem. We do have an Entity Clone module available, but it was not solving our issue. The challenge was that we needed to clone an entity that had existing references, and those values had to be changed in the cloned entity based on certain conditions, e.g. security groups assigned to a specific department.

These references were paragraphs, widgets, as well as other custom entity types. If we clone the node using the createDuplicate() function, it creates a duplicate node, but then we have to attach all the field definitions from the original node manually.

The challenge was in the entity clone process:

  • Base field definitions are already available from the original content, which references existing entities.
  • While creating the duplicate, we only have the ID of an entity that is not yet saved, and we are trying to attach that definition to the newly created duplicated content.

Because of this, the content was not being saved with the new modified value.

We found a workaround by reviewing the entity clone module process further.

During the entity clone process, it saves the duplicate node twice:

  • At first, it creates an exact duplicate of the original content and saves it. On saving, the ID gets created, and then all the reference fields are attached.
  • Then it saves a second time with all the references of the original content.

We modified the references of the cloned content while saving it the second time, and we implemented the necessary business logic to modify the references.

The following snippet will help in understanding the solution.

 

To perform any alter operation, we have to implement hook_entity_presave().

$entity gives you the cloned entity during the entity clone process.

$original_content gives you the values of the parent content from which we are initiating the clone.

Now, you can implement your business logic inside the presave hook to modify the cloned node's reference field values.
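A minimal sketch of such a presave hook; the module name, the way the clone context and parent content are tracked, the field name and the helper function are all assumptions for illustration, not the project's actual code:

```php
<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 */
function mymodule_entity_presave(EntityInterface $entity) {
  // Hypothetical flag set by our own code when it initiates the clone;
  // it carries the parent content the clone was made from.
  if ($entity->getEntityTypeId() !== 'node' || !isset($entity->original_content)) {
    return;
  }
  $original_content = $entity->original_content;

  // Illustrative business logic: swap the referenced security group for
  // the one belonging to the cloning user's department.
  $new_group_id = mymodule_department_group_for_current_user();
  $entity->set('field_security_group', ['target_id' => $new_group_id]);
}
```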

With the above code, we can change the cloned reference values while cloning the entity. I would love to learn from others if there are any other ways to implement the same.

How to create Custom Menu based on Specific Roles and Groups in Drupal

I was working on a project recently where we came across a very unique situation.

The project required menus to be shown based on roles. These roles were tied to groups created earlier by the previous developer team. Each department wanted complete access to configure (add/edit/delete) the Drupal menu, along with a drag-and-drop option, within the department. This was to be accomplished without giving them admin access to the project.

Menu creation needed a workflow: once a menu item has been added, it should be in the draft stage and must be reviewed and published by the head of the department to make it live. There was also a need for additional fields (text and image) along with each menu item, to highlight the content of certain pages in the menu drop-down itself. Addition and updating of menu items was expected to happen in a Drupal dialog (popup).

Challenges

  1. By default, there is no option in Drupal to create menus per role. There is a contributed module available, Menu Per Role, but it can be configured only for roles, whereas our need was to make it work with groups too.
  2. Since a menu is a configuration entity (schema file) and not a content entity, no option was available to add extra fields to menu items.
  3. By default, a Drupal modal dialog opens a custom form in a popup, and on submission of the form you have to issue the close-dialog command to close the popup and submit the form. But the requirement was to open a new form (while adding a new menu item) without closing the modal.

Because of the above three scenarios, we created a custom module to enable the following functionalities.

    • Department-wise Menu Creation Configuration
    • Option to publish menu items through a Draft state

How we enabled role & group specific menu configuration

  1. Enable Roles & Groups

We created a custom page to list all the menu items:

  • Create a routing file

  • Fetch the roles of the current user

  • Fetch groups based on the roles of the current user, and grant the user access based on those specific roles and groups
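The routing file from the first bullet could be shaped like this; the module, path and class names are illustrative, with access decided in code from the user's roles and groups:

```yaml
# mymodule_menu.routing.yml (hypothetical module name)
mymodule_menu.menu_list:
  path: '/department/menu'
  defaults:
    _controller: '\Drupal\mymodule_menu\Controller\MenuListController::build'
    _title: 'Department menu items'
  requirements:
    # Custom access callback checks the current user's roles and groups.
    _custom_access: '\Drupal\mymodule_menu\Controller\MenuListController::access'
```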

  2. How we added extra fields to menu items

  • Created custom entity

  • Created a custom table to link the custom entity ID with the menu ID:

Create a schema file for creating the custom table in Drupal 8

  • On creation of a promo, run a Drupal insert query to add a row to the custom table
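The schema file from the bullets above can be sketched as follows; the module, table and column names are assumptions for illustration:

```php
<?php

/**
 * Implements hook_schema().
 *
 * Goes in mymodule_menu.install (hypothetical module name).
 */
function mymodule_menu_schema() {
  $schema['mymodule_menu_promo'] = [
    'description' => 'Links a promo custom entity to a menu link.',
    'fields' => [
      'id' => ['type' => 'serial', 'not null' => TRUE],
      'promo_id' => ['type' => 'int', 'not null' => TRUE],
      'menu_link_id' => ['type' => 'int', 'not null' => TRUE],
    ],
    'primary key' => ['id'],
  ];
  return $schema;
}
```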
     

  3. How we created a new form without closing the modal

I added the following custom Ajax command to achieve this:

  • Add UpdateMenuCommand.php file under module_name/src/Ajax folder

  • Then in js file, add like this:

Drupal.AjaxCommands.prototype.updateMenu is the key name mentioned in the UpdateMenuCommand.php file. Use the same name in the js file.

By following this procedure, we were able to create role-based menus that worked with group-related permissions. This can also be used when you need an extra field in the menu.


Sajari Search Custom Implementation with Drupal for Better Performance

Google discontinued its Site Search as a Service on April 1, 2017. For one of our clients who was using Google CSE, the team decided to implement Sajari search, which is a high-performance custom search for enterprises.

There is a contributed module already available to use Sajari on Drupal websites. But we know that each additional contributed module adds overhead to Drupal, impacting the performance.

In our case, the client was very keen to have minimal impact on performance. So, we decided to build a lightweight custom module for Sajari integration ensuring appropriate custom search matches.

The Sajari team gave us the JavaScript code snippet for the functionality. They also provided the unique key for our website. Based on our research and the documentation provided, we completed the implementation in the following steps.
 

  • Created the search box using HTML and added the class that was specified by Sajari
  • Our objective was to display the results on a specific page. For this, we added a routing page URL in the JavaScript.
  • The target URL accepts the query parameter and displays the result on the page.
  • Sajari provides the option to display results in categorized tabs, which we enabled.

Note: When we moved our site from HTTP to HTTPS, only data from the HTTP pages was being displayed, so we had Sajari re-crawl the site. Kudos to the Sajari team: we have not seen any downtime so far, and we were able to display the HTTPS content too.


Flutter - Fast way to develop iOS and Android apps from a single codebase

Flutter is an open-source application SDK that allows you to build cross-platform (iOS and Android) apps with one programming language and codebase. Since Flutter is an SDK, it provides tools to compile your code to native machine code.

It also includes a framework/widget library that gives you reusable UI building blocks (widgets), utility functions and packages.

It uses the Dart programming language (developed by Google), which is focused on front-end user interface development. Dart is an object-oriented, strongly typed programming language, and its syntax is a mixture of Java, JavaScript, Swift and C/C++/C#.

Why do we need flutter?

You only have to learn and work with one programming language, Dart, and therefore you have a single codebase for both the iOS and Android applications. Since you don't have to build the same interface separately for iOS and Android, it saves you time.

  • Flutter gives you an experience of native look and feel of mobile applications.
  • It also allows you to develop games and add animations and 2D effects.
  • And the app development will be fast as it allows hot reloading. 

Development Platforms:

To develop a Flutter application you will require the Flutter SDK, just as you need the Android SDK to develop an Android application.

The IDEs you will need to develop a Flutter application are:

Android Studio: needed to run the emulator and the Android SDK.

VS Code: an editor you can use to write Dart code (not required, since you can also write Dart code in Android Studio or Xcode).

Xcode: needed to run the iOS simulator.

Steps to install Flutter on Linux:

To install Flutter on your system, follow the official docs.

Now, here are the steps to install and run your first hello-world Android app with Flutter:

After you are done with the Flutter installation from the official docs, just open your terminal and run

flutter doctor

You should see something like this:

Flutter doctor

Now, to create a Flutter application, run the command below in your preferred directory (please use only Latin letters, digits and underscores in the name of the app, otherwise you may face errors)

flutter create hello_world_app

Now you should see the folder structure of the app like this:

App folder structure

Your application will be in hello_world_app/lib/main.dart

Note that you will write most, or maybe all, of your code in the lib directory.

Now you can replace main.dart file’s code with the code given below.


import 'package:flutter/material.dart';

void main() =>
    runApp(MyApp()); // main function is the entry point of the application

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('HELLO WORLD'),
        ),
        body: Material(
          child: Center(
            child: Text('HELLO WORLD!'),
          ),
        ),
      ),
    );
  }
}

In Flutter almost everything is a widget. Flutter is full of widgets; it provides a widget for everything: buttons, input fields, tables, dialogs, tab bars, card views, and the list goes on.

Here in the first line, we import the material.dart library, a rich set of widgets that implement Material Design.


void main() => runApp(MyApp());

The main function is the very entry point of the application; it calls the runApp function, which takes the MyApp widget as a parameter.

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {...}
}

This is a widget that you will use to build your app; it can be either stateful or stateless.

A stateful widget has a mutable state, and this kind of widget must have the createState() method.

A stateless widget has no internal state, like an image or a text label, and must have the build() method.

Our app does not need a stateful widget, as we don't have to change any state of the app.

So the internal part is like this:
MaterialApp() ⇒ a Material Design widgets wrapper
Material() ⇒ creates a piece of material
Scaffold() ⇒ creates a visual scaffold for Material Design widgets
AppBar() ⇒ creates a Material Design app bar for the app
Center() ⇒ creates a widget that centers its child widget
Text() ⇒ a text widget


To run this flutter application:

You will need an Android or iOS emulator, or a physical device connected, to run this app.

You can use the below-given commands to run the app:

flutter run ==> it will run the app on the connected device or emulator
flutter run -d DEVICE-ID ==> will run on a specific device or emulator
flutter run -d all ==>  will run on all connected devices 

After this, you will see a screen something like this:

Hello World app

Voila, we have just built our first application using Flutter. This should be a good starting point to develop database-driven applications. I have built a new application, treLo, a roadside assistance platform, using Flutter; we released it within one week. I would love to hear your feedback and the kind of ideas you are working on using Flutter.

Drupal Contribution Hour at Valuebound: 2019

Drupal is gaining adoption in enterprises. Drupal 9, which will be released in June 2020, should speed up the adoption even further. Drupal 8.6, released last September, included major improvements in the layout system, new media features, and better migration support.

The biggest hindrance to the growth of Drupal is the availability of quality developers. With more and more enterprise companies adopting Drupal for various business applications, including CMS, intranet, extranet and commerce, there is an increased need for experienced Drupal site builders, developers, and themers. To continue the growth momentum, as a community we need to work towards building a bigger talent pool of good Drupal developers. We need to introduce Drupal to more and more people early in their careers. Every coder becomes a master of their field through experience, developing technical skills, solving difficult problems, and being eager to learn and to teach others.

Drupal also provides every developer a unique opportunity to build a personal brand, by doing what they are most comfortable with: coding.

According to developers, being an open-source contributor is a key selling point for your professional skills. One of the most popular open-source CMSs is Drupal, a growing platform with 1.37 million members, of whom 114,000 are actively contributing.

We at Valuebound have taken to heart the advice of Kristen Senz, researcher at Harvard Business School: "Companies that contribute and give back learn how to better use the open-source software in their own environment." We organized Drupal Contribution Hours on 25th May, 22nd June, and 18th July with a host of aspiring Drupal developers as well as mentors, where we introduced how everyone can contribute. We helped them select issues, set up Git, create patches and commit the code.

Bravo! We touched 40+ issues in the last few months, and the team submitted patches to the below-mentioned issues. A few of them have been accepted by the maintainers.

https://www.drupal.org/project/field_permissions/issues/3042752
https://www.drupal.org/project/header_and_footer_scripts/issues/3050967
https://www.drupal.org/project/login_redirect_to_front/issues/3065756
https://www.drupal.org/project/address/issues/2995992
https://www.drupal.org/project/perfmon/issues/3065862
https://www.drupal.org/project/shorten/issues/3065879
https://www.drupal.org/project/roleassign/issues/3065871
https://www.drupal.org/project/node_title_validation/issues/3065839
https://www.drupal.org/project/entity_clone/issues/3068549
https://www.drupal.org/project/perfmon/issues/3068669
https://www.drupal.org/project/site_settings/issues/3067951
https://www.drupal.org/project/phone_registration/issues/3071779
https://www.drupal.org/project/entity_reference_layout/issues/3071702
https://www.drupal.org/project/paragraphs/issues/2901390


It's not just a 4-hour game; many of them have kept working after that too and keep asking us queries. If you are a Drupal developer reading this, drop a comment to gain access to the issues list.

Oh, I forgot to mention: I am new to Drupal and have started feeling the love of the Drupal community, experiencing the true joy of making my minor contribution to this world by organizing this event. We ended the day with the resolve to continue the tradition every 3rd Thursday of the month. Until then, the team continues working on the patches they have picked up, which require more time.

Build your CI/CD pipeline with AWS Elastic Beanstalk, CodePipeline and CloudFormation

Building an immutable infrastructure is the ultimate goal of this solution. Reusability of code for creating similar environments in a short time, in a developer-friendly way, is another aspect of this solution. AWS CloudFormation is the orchestrator for provisioning and maintaining the infrastructure through infrastructure as code. The entire infrastructure can be created by executing a single template, which creates a nested stack with all dependent resources. The life cycle of each component of a stack can be managed by updating the parent stack; CloudFormation detects the changes in all nested templates and executes the change sets.

CloudFormation, VPC, EC2, ELB, S3, Auto Scaling, AWS Elastic Beanstalk, CodeCommit, AWS CodePipeline, SNS and IAM are used here to implement this solution. AWS CloudFormation is the core component of the infrastructure, maintaining the state of all components. Our network infrastructure leverages a VPC and its components for building a secure network on top of AWS. A single VPC spans all Availability Zones of a region, with different subnets to ensure the servers are distributed across zones for a highly available and fault-tolerant infrastructure. We have different subnets for the different tiers of the web application.

Architecture diagram

Our application is designed in a two-tier architecture pattern. The application logic runs on EC2 servers managed by AWS Elastic Beanstalk, and the data tier is implemented in RDS. Both tiers are scalable. For infrastructure administration and maintenance, a bastion host is deployed in a public subnet. It is highly secured, created from a prebuilt AMI provided by AWS, and allows SSH connections only from trusted IP sources. The application and database servers are hosted in private subnets and can be accessed only from the bastion host. Servers can be connected to only via key-pair authentication to avoid vulnerabilities. The app servers can access the internet through a NAT gateway for software installation.

A Classic Elastic Load Balancer is the user-facing component that accepts the web requests in our application. Traffic is routed to the back-end EC2 servers, which process the web request and return the response to the ELB, where it is consumed by the end user. The ELB is deployed in a public subnet and secured by a VPC security group that allows only HTTP/HTTPS inbound traffic from external sources, and it accesses the back-end servers only over HTTP/HTTPS. To ensure high availability and uniform distribution of traffic, we have enabled cross-zone load balancing. Apart from that, we have configured the load balancer to support session persistence and to maintain an idle timeout between the load balancer and the client.

We use an RDS Aurora database as the data tier for the application. It is deployed as a cluster with read/write endpoints. Both the servers and the database instances are secured by strong security group policies to avoid access from untrusted network sources.

AWS CodeCommit is the source code repository for this project. It is a highly available, private repository managed by AWS. An S3 bucket is used for storing the artifacts, which AWS CodePipeline uses to deploy to the different environments.

The CI/CD pipeline is the core component of this project: it builds the code and deploys the changes to the servers. We use AWS CodePipeline to build the CI/CD pipeline.

How to create the infrastructure?

Our infrastructure is created and managed by AWS CloudFormation. Before executing the template, follow the instructions below to create the infrastructure.
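Because the templates are uploaded to a `templates` folder in S3 (see the prerequisites), a parent stack typically wires them together as nested stacks. The sketch below is illustrative only; the bucket parameter, file names, and resource names are assumptions, not the project's actual template:

```yaml
# Hypothetical parent stack referencing nested templates from S3.
Parameters:
  TemplateBucket:
    Type: String
    Description: S3 bucket holding the "templates" folder

Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: !Sub https://${TemplateBucket}.s3.amazonaws.com/templates/network.yaml

  DatabaseStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: NetworkStack      # VPC and subnets must exist first
    Properties:
      TemplateURL: !Sub https://${TemplateBucket}.s3.amazonaws.com/templates/database.yaml
```

`DependsOn` enforces the creation order described in the steps below: network first, then the stacks that live inside it.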

Pre-requisites:

  1. CodeCommit Repository with the source code of the application
  2. SNS topic with email subscribers
  3. S3 bucket containing the AWS CloudFormation templates. Create a folder called "templates" inside the bucket and upload the CloudFormation templates into that folder.

Steps:

  1. Log in to the AWS Management Console and select CloudFormation in the Services menu.
  2. Click Create Stack (if you have no stacks yet, this is the only option shown).
  3. Enter the required input parameters and execute the stack. The order of execution is given below. CloudFormation parses the inputs and the resources section of the parent stack. It first creates a network stack for the infrastructure, which includes the VPC, subnets, route tables, network ACLs, internet gateway, NAT gateway, and routing policies. A bastion host is created with an appropriate security group policy. An Elastic Beanstalk application is then created for deploying the different environments, such as dev, staging, and production. Next, an Aurora database cluster is created for the dev, staging, and production environments. The DB server has its own security group to control inbound access.

    It also has its own parameter group and config group. Elastic Beanstalk application environments are then created for the different environments. Our runtime is PHP, and we have created a configuration group with the required parameters, such as the load balancer configuration, EC2 auto-scaling configuration, and environment variables for the application environments. The continuous integration and delivery pipeline is created in the last step. It uses CodeCommit as the source and applies changes to the Elastic Beanstalk environments whenever the source code changes, with manual approval for the staging and production environments. Our template also creates the IAM role required by the CodePipeline project.
  4. After a few minutes, the stack is available and we can access the services. Initially, CodePipeline releases the changes to the instances hosted in the Elastic Beanstalk environment.
  5. Access the environment UI and check the application.
  6. Commit a change to the source code. The CI job is triggered within a minute: it pulls the source code from the CodeCommit repository and, for the staging and production environments, waits for manual approval before applying the changes to the servers. Elastic Beanstalk creates a new set of resources, deploys the code to the environment, and removes the old resources after a successful deployment. This cycle repeats whenever a new version is committed to the repository.
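The Beanstalk environment configuration mentioned in step 3 (PHP runtime, auto-scaling limits, load-balanced type, environment variables) is expressed through `OptionSettings` namespaces. A hedged sketch; the solution stack name, sizes, and variable names are illustrative, not taken from the project:

```yaml
# Hypothetical fragment: a load-balanced PHP environment with
# auto-scaling bounds and an application environment variable.
Resources:
  ProdEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !Ref BeanstalkApplication   # assumed resource
      SolutionStackName: 64bit Amazon Linux 2 v3.5.0 running PHP 8.0  # example platform
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: LoadBalanced
        - Namespace: aws:autoscaling:asg
          OptionName: MinSize
          Value: '2'
        - Namespace: aws:autoscaling:asg
          OptionName: MaxSize
          Value: '4'
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: APP_ENV          # hypothetical app variable
          Value: production
```

Separate environments (dev, staging, production) would each get their own `Environment` resource with different option values.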

CI/CD pipeline for deploying a PHP application hosted in an Elastic Beanstalk environment:

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates. In our case, we have built a CI pipeline using AWS CodeCommit and CodePipeline. It has three stages.

Stage1: Source

When the pipeline is triggered by a change in the configured repository branch, the very first step is to download the source code from the server into the workspace where the next set of actions is performed. Here, we have configured the pipeline to pull from the specified repository name and branch. If a single stage has multiple actions, we can set a run order to execute the actions in a particular sequence.
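In a CodePipeline definition, the source stage above corresponds to a CodeCommit source action. A minimal sketch of the stage block, with assumed parameter and artifact names:

```yaml
# Hypothetical fragment: pipeline source stage pulling from CodeCommit.
Stages:
  - Name: Source
    Actions:
      - Name: PullSource
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeCommit
          Version: '1'
        Configuration:
          RepositoryName: !Ref RepoName   # assumed parameter
          BranchName: master
        OutputArtifacts:
          - Name: SourceOutput            # consumed by later stages
        RunOrder: 1                       # sequence within the stage
```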

Stage2: Approve

Some projects might not require a build step, so we can move straight to the next stage. In our case, that is the approval stage. The project manager can approve the changes to be deployed in the environment or deny them. We use SNS to send a notification to the subscribers asking them to approve the changes. If the action is approved, the pipeline moves to the next stage; otherwise it is aborted.
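The manual approval gate is a built-in CodePipeline action type; wiring it to the SNS topic from the prerequisites looks roughly like this (topic reference and message text are assumptions):

```yaml
# Hypothetical fragment: manual approval gate notifying via SNS.
  - Name: Approve
    Actions:
      - Name: ManualApproval
        ActionTypeId:
          Category: Approval
          Owner: AWS
          Provider: Manual
          Version: '1'
        Configuration:
          NotificationArn: !Ref ApprovalTopic   # assumed SNS topic
          CustomData: Approve deployment to this environment
        RunOrder: 1
```

If no approver acts before the approval times out, or the change is rejected, the pipeline execution stops here and nothing is deployed.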

Stage3: Deploy

Depending on the approval, the pipeline may or may not reach the deploy stage. During the deploy stage, the code is deployed to all the application environments. Elastic Beanstalk's deployment strategy strongly endorses the Blue-Green deployment pattern. During deployment, users continue to access the older version of the application; no changes are made to the existing servers. Beanstalk creates a new set of resources and applies the changes there. After a successful deployment, users are served the latest version of the application and the old servers are removed.
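The final stage hands the source artifact to Elastic Beanstalk, which then performs the resource swap described above. A hedged sketch, reusing an assumed `SourceOutput` artifact name and assumed application/environment references:

```yaml
# Hypothetical fragment: deploy the approved artifact to Beanstalk.
  - Name: Deploy
    Actions:
      - Name: DeployToBeanstalk
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: ElasticBeanstalk
          Version: '1'
        InputArtifacts:
          - Name: SourceOutput              # from the source stage
        Configuration:
          ApplicationName: !Ref BeanstalkApplication  # assumed refs
          EnvironmentName: !Ref BeanstalkEnvironment
        RunOrder: 1
```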

The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in similar environments to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. Continuous delivery can be fully automated with a workflow process or partially automated with manual steps at critical points.

With continuous deployment, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated.

The source code is available at https://github.com/rkkrishnaa/cicd-elasticbeanstack.
