Creating a Secure API using Node.js, Express.js and Passport-JWT

Node.js is a server-side platform built on Google Chrome's JavaScript engine (the V8 engine). It is an open-source, cross-platform runtime environment for executing JavaScript code outside of the browser. Node.js is used for developing server-side and networking applications.

  1. Steps for the Installation of Node.js  
    1. For Windows Users 
      1. Go to https://nodejs.org/en/download/
      2. Click on the Windows Installer.
      3. Click “Continue” through the installer screens.
      4. Check the Node version by running the command:
        node -v
    2. For Ubuntu/macOS Users
      1. The first step is to check the Node version using the command below.

        $ node -v
        Output:
        $ v5.0.0
      2. The next step is to check the npm version using the command below.

        $ npm -v
        Output: 
        $ 4.0.0
      3. If both versions are at least those values, go to the next section; if not, remove Node by running:
        $ sudo apt-get remove --purge nodejs
      4. Now, to install it again, first install curl:
        $ sudo apt-get install curl
      5. Download the Node.js setup script

        $ curl -sL https://deb.nodesource.com/setup_10.x | sudo bash -
        Note: You can use any version instead of 10.x such as 8.x, 6.x
      6. Let’s install the Node.js package
        $ sudo apt-get install -y nodejs
        Check the node and npm versions again using the above commands and make sure they are greater than or equal to the versions shown earlier.
  2. Create your first simple Node.js project

    1. Create the folder “node-project”

    2. Create a file “app.js” inside it and add the code shown after these steps.

    3. Run command “node app.js”

    4. It will print “Hello! World” in the command line.
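A minimal app.js for this example (the original post's snippet was an image, so this is a sketch) can be a single line:

console.log('Hello! World');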

3. Creating Secure API using Node.js, Express.js and Passport-JWT

Express.js: Express is a web application framework for Node.js. It is a third-party library used, among other things, for routing.

Passport-JWT: This module lets you authenticate API endpoints using JSON Web Tokens. It is used to secure RESTful endpoints without sessions.

npm: The Node Package Manager, a command-line tool as well as a registry of third-party libraries that we can add to our Node applications.

Steps for Creating a Secure Node API

 1. Create the folder ‘node-project’ and run the following command inside it:

 npm init

It will create the package.json file. This file contains details about the project such as name, author, version, dependencies, and repository information.

2. Then, run the following command inside the root folder.

npm install --save express passport passport-local passport-jwt jsonwebtoken 

Then, check the package.json. It will contain all the above modules.

3. Create the file “app.js” and include the installed modules in it using the require keyword, as sketched below.
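The top of app.js might look like this (a sketch; the exact require list mirrors the modules installed in step 2):

const express = require('express');
const passport = require('passport');

const app = express();
app.use(passport.initialize());
app.listen(3000);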

4. Create one more folder called “api” inside the root folder, create a file called user.js in it, and add login code along the lines sketched after the notes below.

Creation and Storage of the JWT:

const token = jwt.sign({ userName: response.userName, userId: response.userId }, 'secretkey');
  1. When a user logs in, we first check whether the user exists in our database or not.
  2. If the user exists, we create the token by signing a combination of the user object and the secret key. This is the JWT (JSON Web Token).
  3. The token is stored on the client side (typically in local storage).
  4. Whenever a user requests access to the API, we pass the token to our middleware function to verify whether it is valid. Only if it is valid do we allow access to our API endpoints.
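Since the original user.js snippet was an image, here is a minimal sketch of the login handler; findUser() is a hypothetical database helper you would replace with your own query:

const jwt = require('jsonwebtoken');

exports.login = async (req, res) => {
  // Hypothetical database lookup; replace with your own user query.
  const response = await findUser(req.query.userName, req.query.password);
  if (!response) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  // Sign the user object with the secret key to create the JWT.
  const token = jwt.sign(
    { userName: response.userName, userId: response.userId },
    'secretkey'
  );
  res.json({ token });
};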

 

5. Now add the following line to our “app.js”:

app.get('/login', user.login);

6. Now create one more folder called middleware inside “api”, and inside it create the file passport.js with code along the lines sketched below.

Here, I am using the passport-jwt strategy. Once the token is stored on the client side, each time the client accesses our API we call this function: it verifies and decodes the token using the “secret key” and again checks whether the user exists in our database. If the user exists, the strategy returns the user object and the request proceeds to our API endpoint. If the user does not exist, it returns the error “Unauthorized”.
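A minimal sketch of api/middleware/passport.js along those lines; findUserById() is a hypothetical database helper, and the secret must match the one passed to jwt.sign():

const JwtStrategy = require('passport-jwt').Strategy;
const ExtractJwt = require('passport-jwt').ExtractJwt;

module.exports = passport => {
  const opts = {
    // Read the token from the "Authorization: Bearer <token>" header.
    jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
    secretOrKey: 'secretkey',
  };
  passport.use(
    new JwtStrategy(opts, async (jwtPayload, done) => {
      // Hypothetical lookup; replace with your own user query.
      const user = await findUserById(jwtPayload.userId);
      if (user) {
        return done(null, user); // user object is passed to the endpoint
      }
      return done(null, false); // results in an "Unauthorized" error
    })
  );
};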

7. Then include this file in our app.js, i.e.

require('./api/middleware/passport')(passport);

8. Create a function in user.js that fetches all user data, along the lines sketched below.
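A sketch of the function; findAllUsers() is again a hypothetical database helper:

exports.getAllUsers = async (req, res) => {
  // Hypothetical query returning every user; replace with your own.
  const users = await findAllUsers();
  res.json(users);
};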

9. Then add our passport-jwt middleware in app.js.

passport.authenticate('jwt', {session: false})

10. While accessing our API /userData, Express will call the middleware first. Only if the token is valid will it allow access to the getAllUsers function in user.js; otherwise it will return the error “Unauthorized”.
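Putting steps 9 and 10 together, the protected route in app.js might look like this:

app.get(
  '/userData',
  passport.authenticate('jwt', { session: false }),
  user.getAllUsers
);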

Adding Custom Fields in Search Results for a Decoupled Drupal Architecture

Nowadays, most of the sites we work on are built on the decoupled Drupal approach. A decoupled website opens up multiple opportunities, and along with new opportunities we also get our fair share of challenges. One such challenge was when I was tasked with creating a module that takes input (a search string and file type) from the front-end framework and returns the result set along with metadata, e.g. image, date, content type, and file type.

Before we move on to how we implemented this, we need to understand how search operates on a Drupal site. It has three main parts: Source, Index, and Results.

Source refers to any kind of content we have on the website. We parse the content and store the metadata in the index, and we display the results on the front end.

In our case, the front end was built using AngularJS.

First, we had to identify the schema in which all of our source data was stored.

The required search page had a basic set of features like title, description, taxonomy, and link, plus some extra metadata like image, date, type, and file type. Since we search across multiple sites, we also needed information about the site each item comes from.

I created a custom module exposing an API that can be used for content search, like a REST resource, using a _controller with the POST method.

Below is a basic module to explain how we can create Search API to be consumed by the external applications.

We would need to create files as per the below structure.

customapi_search/
├── customapi_search.info.yml
├── customapi_search.routing.yml
└── src
    └── Controller
        └── SearchAPIController.php


Step 1. Create the customapi_search.info.yml file to define the metadata of the module, for example:
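A minimal sketch of the file (the description and package values are placeholders):

# customapi_search.info.yml
name: Custom API Search
type: module
description: 'Exposes a POST content-search endpoint.'
package: Custom
core: 8.x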

Step 2. Create the routing file customapi_search.routing.yml, in which we define our path (endpoint), controller, and methods, for example:
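A sketch of the routing file; the search() method name and the permission are assumptions:

# customapi_search.routing.yml
customapi_search.content_search:
  path: '/api/content-search'
  defaults:
    _controller: '\Drupal\customapi_search\Controller\SearchAPIController::search'
  methods: [POST]
  requirements:
    _permission: 'access content'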

Step 3. Create a SearchAPIController.php controller file in which we define the custom _controller handling the [POST] resource.

In our case, the controller acts as a REST API over the POST method: it extends the ControllerBase class and uses an entity query to fetch data according to the POST parameter values.
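A minimal sketch of the controller; the returned fields and the title-based matching are assumptions (the real module also indexed file type and other metadata):

<?php

namespace Drupal\customapi_search\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\node\Entity\Node;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

class SearchAPIController extends ControllerBase {

  /**
   * Handles POST /api/content-search.
   */
  public function search(Request $request) {
    $params = json_decode($request->getContent(), TRUE);

    // Entity query driven by the POST parameter values.
    $query = \Drupal::entityQuery('node')
      ->condition('status', 1)
      ->condition('title', $params['q'], 'CONTAINS')
      ->range($params['firstResult'] ?? 0, $params['numberOfResults'] ?? 10);
    if (!empty($params['filters']['type'])) {
      $query->condition('type', $params['filters']['type'], 'IN');
    }
    if (($params['sortBy'] ?? '') === 'latest') {
      $query->sort('created', 'DESC');
    }

    $results = [];
    foreach (Node::loadMultiple($query->execute()) as $node) {
      $results[] = [
        'title' => $node->getTitle(),
        'type' => $node->bundle(),
        'created' => $node->getCreatedTime(),
        'url' => $node->toUrl()->setAbsolute()->toString(),
      ];
    }
    return new JsonResponse(['results' => $results]);
  }

}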

Endpoint of the custom search API: /api/content-search

The JSON query parameters look like:

{"q": "test", "firstResult": 0, "numberOfResults": 1000, "filters": {"type": ["page", "pdf", "docx"]}, "sortBy": "latest"}

The above module provides the endpoint, and its response is a JSON document containing the matching items and their metadata.

Changing Cloned Reference Values While Cloning an Entity in Drupal 8

Recently, I came across a unique situation in an employee experience portal project that I was working on.

As in any enterprise, there were multiple departments in the company generating content for specific departments with varying levels of access as well as security.

To increase synergy as well as collaboration between departments, they had decided to allow authors to clone content from other departments. This was also to enable them to reuse the design layout as well as content created by others.

We realized that this option is not available within the Drupal ecosystem. We do have an Entity Clone module available, but it did not solve our issue. The challenge was that we needed to clone an entity that had existing references, and those values had to be changed in the cloned entity based on certain conditions, e.g. security groups assigned to a specific department.

These references were paragraphs, widgets, and other custom entity types. If we clone the node using the createDuplicate() function, it creates a duplicate node, but then we have to attach all the field definitions from the original node manually.

The challenge was in the entity clone process:

  • Base field definitions are already available from the original content, and the original content references existing entities.
  • While creating the duplicate, we only have an entity that has not been saved yet (so it has no final ID), and we are trying to attach those definitions to the newly created duplicated content.

Because of this, the content was not being saved with the new modified value.

We found a workaround by reviewing the entity clone module process further.

During the entity clone process, it saves the duplicate node twice:

  • First, it creates an exact duplicate of the original content and saves it. On saving, the ID gets created, and then it attaches all the reference fields.
  • Then it saves a second time with all the references of the original content.

We modified the references of the cloned content while saving it the second time, implementing the necessary business logic to modify the references.

The following snippet will help in understanding the solution.

To perform any alter operation, we have to implement hook_entity_presave.

$entity gives you the cloned entity during the entity clone process.

$original_content gives you the values of the parent content from which we are initiating the clone.

Now, you can implement your business logic inside hook_entity_presave to modify the cloned node's reference field values.
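Since the original snippet is not reproduced here, below is a minimal sketch under stated assumptions: the module is called mymodule, the clone process makes the parent entity available (shown here as a hypothetical original_content property), and field_security_group is a hypothetical reference field:

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 */
function mymodule_entity_presave(EntityInterface $entity) {
  // Act only on the second save of the clone process, i.e. when the
  // cloned node already has an ID and its references are attached.
  if ($entity->getEntityTypeId() === 'node' && !$entity->isNew()) {
    // Hypothetical: the parent content the clone was initiated from.
    $original_content = $entity->original_content ?? NULL;
    if ($original_content && $entity->hasField('field_security_group')) {
      // Business logic: swap the cloned reference based on conditions,
      // e.g. the security group assigned to the current department.
      $entity->set('field_security_group', ['target_id' => mymodule_department_group_id()]);
    }
  }
}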

With code along these lines, we can change the cloned reference values while cloning the entity. I would love to learn from others if there are other ways to implement the same.

How to Create a Custom Menu Based on Specific Roles and Groups in Drupal

I was working on a project recently where we came across a very unique situation.

The project required menus to be shown based on roles. These roles were tied to groups created earlier by the previous developer team. Each department wanted complete access to configure its Drupal menu (add/edit/delete) along with a drag & drop option within the department, without being given admin access to the project.

Menu creation needed a workflow: once a menu item has been added, it should be in the draft stage and must be reviewed and published by the head of the department to go live. There was also a need to add extra fields (text and image) to each menu item, to highlight the content of certain pages in the menu drop-down itself. Addition and updating of menu items were expected to happen in a Drupal dialog (popup).

Challenges

  1. By default, there is no option in Drupal to create menus per role. There is a contributed module available, Menu per Role, but it can be configured only for roles, whereas we needed it to work with groups too.
  2. Since a menu is a configuration entity (schema file) and not a content entity, no option was available to add extra fields to menu items.
  3. By default, a Drupal modal dialog opens a custom form in a pop-up, and on submission of the form you have to issue a close-dialog command to close the popup and submit the form. But the requirement was to create a new form (while adding a new menu item) without closing the modal.

Because of the above three scenarios, we created a custom module to enable the following functionalities.

    • Department-wise Menu Creation Configuration
    • Option to publish menu in Draft state

How we enabled role & group specific menu configuration

  1. Enable Roles & Groups

We created a custom page to list all the menu items:

  • Create a routing file

  • Fetch roles of current user

  • Fetch groups based on the roles of the current user, and give the user access based on the specific role and groups, as sketched below
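A minimal sketch of the access check; mymodule_load_groups_for_roles() and the role name are hypothetical:

// Roles of the current user.
$roles = \Drupal::currentUser()->getRoles();

// Hypothetical helper: load the groups tied to those roles.
$groups = mymodule_load_groups_for_roles($roles);

// Grant access only when the user has the right role and belongs to a group.
$has_menu_access = in_array('department_editor', $roles) && !empty($groups);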

  2. How we added extra fields to menu items

  • Created custom entity

  • Created a custom table to link custom entity ID with menu ID:

Create a schema file for creating the custom table in Drupal 8 (see the sketch after this list)

  • On creation of the promo entity, add a Drupal insert query to record it in the custom table (also sketched below)
     
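A sketch of both pieces; the table and column names are placeholders:

/**
 * Implements hook_schema().
 */
function mymodule_schema() {
  $schema['mymodule_menu_promo'] = [
    'description' => 'Links custom promo entity IDs with menu item IDs.',
    'fields' => [
      'id' => ['type' => 'serial', 'not null' => TRUE],
      'menu_link_id' => ['type' => 'varchar', 'length' => 255, 'not null' => TRUE],
      'promo_id' => ['type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE],
    ],
    'primary key' => ['id'],
  ];
  return $schema;
}

// On creation of the promo entity, record the link in the custom table.
\Drupal::database()->insert('mymodule_menu_promo')
  ->fields(['menu_link_id' => $menu_link_id, 'promo_id' => $promo->id()])
  ->execute();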

3. How to create a new form without closing the modal

I added the following custom Ajax command to achieve this:

  • Add UpdateMenuCommand.php file under module_name/src/Ajax folder

  • Then, in the JS file, add the command handler, as sketched below:

Drupal.AjaxCommands.prototype.updateMenu must use the same key name ('updateMenu') that is returned by the UpdateMenuCommand.php file. Use that same name in the JS file.
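A sketch of the JS side; the wrapper selector and the response property are assumptions:

(function (Drupal) {
  // 'updateMenu' must match the command name that UpdateMenuCommand.php
  // returns from its render() method.
  Drupal.AjaxCommands.prototype.updateMenu = function (ajax, response, status) {
    // Hypothetical behaviour: load a fresh add-menu-item form into the
    // open dialog instead of closing it.
    document.querySelector('#menu-item-form-wrapper').innerHTML = response.form;
  };
})(Drupal);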

By following this procedure, we were able to create a role-based menu which worked with group-related permissions. This approach can also be used when you need an extra field in the menu.


Sajari Search Custom Implementation with Drupal for Better Performance

Google discontinued its Site Search as a Service from April 1, 2017. For one of our clients who was using Google CSE, the team decided to implement Sajari search, a high-performance custom search service for enterprises.

There is a contributed module already available to use Sajari on Drupal websites. But we know that each additional contributed module adds overhead to Drupal, impacting the performance.

In our case, the client was very keen to have minimal impact on performance. So, we decided to build a lightweight custom module for Sajari integration ensuring appropriate custom search matches.

The Sajari team gave us the JavaScript code snippet for the functionality. They also provided the unique key for our website. Based on our research and the documentation provided, we completed the implementation in the following steps:

  • Created the search box using HTML and added the class specified by Sajari.
  • Our objective was to display the results on a specific page. For this, we added a routing page URL in the JavaScript.
  • The target URL then accepts the query parameter and displays the results on the page.
  • Sajari provides the option to display results in categorized tabs, which we enabled.

Note: When we moved our site from HTTP to HTTPS, only data from the HTTP site was being displayed, so we had Sajari re-crawl the site. Kudos to the Sajari team: we have not seen any downtime so far, and we were able to display the content over HTTPS too.


Flutter - Fast way to develop iOS and Android apps from a single codebase

Flutter is an open-source application SDK that allows you to build cross-platform (iOS and Android) apps with one programming language and one codebase. Since Flutter is an SDK, it provides the tools to compile your code to native machine code.

It is also a framework/widget library that provides reusable UI building blocks (widgets), utility functions, and packages.

It uses the Dart programming language (developed by Google), which is focused on front-end user interface development. Dart is an object-oriented, strongly typed programming language whose syntax is a mixture of Java, JavaScript, Swift, and C/C++/C#.

Why do we need Flutter?

You only have to learn and work with one programming language, Dart, and therefore you have a single codebase for both the iOS and Android applications. Since you don't have to build the same interface twice, once for iOS and once for Android, it saves you time.

  • Flutter gives you an experience of native look and feel of mobile applications.
  • It also allows you to develop games and add animations and 2D effects.
  • And the app development will be fast as it allows hot reloading. 

Development Platforms:

To develop a Flutter application you need the Flutter SDK, just like you need the Android SDK to develop an Android application.

The tools you will need to develop a Flutter application are:

Android Studio: It is needed to run the emulator and Android SDK.

VS Code: An editor you can use to write Dart code. (Optional, since you can also write Dart code in Android Studio or Xcode.)

Xcode: Needed to run the iOS simulator.

Steps to install Flutter on Linux:

To install Flutter on your system, follow the official documentation.

Now here are the steps to create and run your first hello-world app with Flutter.

After you are done with the Flutter installation from the official docs, open your terminal and run

flutter doctor

You should see a report listing the status of your Flutter toolchain (SDK, connected devices, and IDE plugins).

Now, to create a Flutter application, run the command below in your preferred directory (use only Latin letters, digits, and underscores in the name of the app, otherwise you may face errors):

flutter create hello_world_app

Now you should see the generated folder structure of the app.

Your application code will be in hello_world_app/lib/main.dart.

Note that you will write most, or maybe all, of your code in the lib directory.

Now you can replace the main.dart file's code with the code given below.


import 'package:flutter/material.dart';

void main() =>
    runApp(MyApp()); // main function is the entry point of the application

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('HELLO WORLD'),
        ),
        body: Material(
          child: Center(
            child: Text('HELLO WORLD!'),
          ),
        ),
      ),
    );
  }
}

In Flutter, almost everything is a widget. Flutter is full of widgets: it provides one for nearly everything, like buttons, input fields, tables, dialogs, tab bars, card views, and the list goes on.

Here, in the first line, we import the material.dart library, a rich set of widgets that implement Material Design.


void main() => runApp(MyApp());

The main function is the entry point of the application; it calls the runApp function, which takes the MyApp widget as a parameter.

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {...}
}

This is the widget that you will use to build your app; it can be either stateful or stateless.

A stateful widget has mutable state, and this kind of widget must implement the createState() method.

A stateless widget has no internal state, like an image or a piece of text; it must implement the build() method.

Our app does not use a stateful widget, as we don't have to change any state of the app.
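For contrast, here is a minimal sketch of a stateful widget (not part of the hello-world app): a button whose label changes each time it is tapped.

class Counter extends StatefulWidget {
  @override
  _CounterState createState() => _CounterState();
}

class _CounterState extends State<Counter> {
  int _count = 0; // mutable state lives in the State object

  @override
  Widget build(BuildContext context) {
    return FlatButton(
      // setState() tells Flutter to rebuild this widget with the new state.
      onPressed: () => setState(() => _count++),
      child: Text('Pressed $_count times'),
    );
  }
}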

So the internal part is structured like this:
MaterialApp() ⇒ a wrapper for Material Design widgets
Material() ⇒ creates a piece of material
Scaffold() ⇒ creates a visual scaffold for Material Design widgets
AppBar() ⇒ creates a Material Design app bar for the app
Center() ⇒ creates a widget that centers its child widget
Text() ⇒ a text widget


To run this Flutter application:

You will need an Android or iOS emulator, or a physical device connected, to run this app.

You can use the commands given below to run the app:

flutter run ==> it will run the app on the connected device or emulator
flutter run -d DEVICE-ID ==> will run on a specific device or emulator
flutter run -d all ==>  will run on all connected devices 

After this, you will see the app running, with a “HELLO WORLD” app bar and the centered “HELLO WORLD!” text.

Voila, we have just built our first application using Flutter. This should be a good starting point to develop database-driven applications. I have built a new application, treLo, a roadside assistance platform, using Flutter; we released it within one week. I would love to hear your feedback and about the kinds of ideas you are working on using Flutter.

Drupal Contribution Hour at Valuebound: 2019

Drupal is gaining adoption in enterprises. Drupal 9, which will be released in June 2020, should speed up the adoption even further. Drupal 8.6, released last September, included major improvements in the layout system, new media features, and better migration support.

The biggest hindrance to the growth of Drupal is the availability of quality developers. With more and more enterprise companies adopting Drupal for various business applications, including CMS, intranet, extranet, and commerce, there is an increased need for experienced Drupal site builders, developers, and themers. To continue the growth momentum, as a community we need to work towards building a bigger talent pool of good Drupal developers, and we need to introduce Drupal to more people early in their careers. Every coder becomes a master in their field through experience: developing technical skills, solving difficult problems, and being eager to learn and to teach others.

Drupal also provides every developer a unique way to build a personal brand, and by doing what they are most comfortable with: coding.

For developers, being an open-source contributor is a key selling point for your professional skills. One of the most popular open-source CMSes is Drupal: a growing platform with 1.37 million members, of whom 114,000 are actively contributing.

We at Valuebound have taken to heart the advice of Kristen Senz, researcher at Harvard Business School: “Companies that contribute and give back learn how to better use the open-source software in their own environment.” We organized Drupal Contribution Hour on 25th May, 22nd June, and 18th July, along with a host of aspiring Drupal developers as well as mentors, where we introduced how everyone can contribute. We helped them select issues, set up Git, create patches, and commit code.

Bravo! We touched 40+ issues in the last few months, and the team submitted patches to the below-mentioned issues. A few of them have been accepted by the maintainers.

https://www.drupal.org/project/field_permissions/issues/3042752
https://www.drupal.org/project/header_and_footer_scripts/issues/3050967
https://www.drupal.org/project/login_redirect_to_front/issues/3065756
https://www.drupal.org/project/address/issues/2995992
https://www.drupal.org/project/perfmon/issues/3065862
https://www.drupal.org/project/shorten/issues/3065879
https://www.drupal.org/project/roleassign/issues/3065871
https://www.drupal.org/project/node_title_validation/issues/3065839
https://www.drupal.org/project/entity_clone/issues/3068549
https://www.drupal.org/project/perfmon/issues/3068669
https://www.drupal.org/project/site_settings/issues/3067951
https://www.drupal.org/project/phone_registration/issues/3071779
https://www.drupal.org/project/entity_reference_layout/issues/3071702
https://www.drupal.org/project/paragraphs/issues/2901390


It's not just a four-hour game; many of the participants have kept working after the event and keep coming to us with queries. If you are a Drupal developer reading this, drop a comment to gain access to the issues list.

Oh, I forgot to mention: I am new to Drupal and have started feeling the love of the Drupal community, experiencing the true joy of making my minor contribution to this world by organizing this event. We ended the day with the resolve to continue the tradition every third Thursday of the month. Till then, the team continues working on the patches they have picked up which require more time.

Build your CI/CD pipeline with AWS Elastic Beanstalk, CodePipeline and CloudFormation

Building an immutable infrastructure is the ultimate goal of this solution; reusability of code, so that a similar environment can be created in a short time in a developer-friendly way, is another aspect. AWS CloudFormation is the orchestrator for provisioning and maintaining the infrastructure through infrastructure as code. The entire infrastructure can be created by executing a single template, which creates a nested stack with all dependent resources. The life cycle of each component of a stack can be managed by updating the parent stack: CloudFormation detects the changes from all nested templates and executes the changesets.

CloudFormation, VPC, EC2, ELB, S3, Auto Scaling, AWS Elastic Beanstalk, CodeCommit, AWS CodePipeline, SNS, and IAM are used here to implement this solution. AWS CloudFormation is the core component of the infrastructure and maintains the state of all components. Our network infrastructure leverages VPC and its components for building a secure network on top of AWS. A single VPC spans all availability zones of a region, with different subnets to ensure the servers are distributed across availability zones for a highly available and fault-tolerant infrastructure. We have a different subnet for each tier of the web application.

Architecture diagram

Our application is designed in a two-tier architecture pattern: the application logic is implemented on EC2 servers managed by AWS Elastic Beanstalk, and the data tier is implemented in RDS. Both tiers are scalable. For infrastructure administration and maintenance, a bastion host is deployed in a public subnet. It is highly secured, created from a prebuilt AMI provided by AWS, and allows SSH connections only from trusted IP sources. Application servers and database servers are hosted in private subnets and can be accessed only from the bastion host. Servers can be connected to only by key-pair authentication to avoid vulnerabilities. The app servers can access the internet through a NAT gateway for software installation.

A Classic Elastic Load Balancer is the user-facing component that accepts web requests in our application. This traffic is routed to the backend EC2 servers, which process the request and return the response to the ELB, where it is then consumed by the end user. The ELB is deployed in a public subnet and secured by a VPC security group that allows only HTTP/HTTPS inbound traffic from external sources; the ELB accesses the backend servers only over HTTP/HTTPS. To ensure high availability and uniform distribution of traffic, we have enabled cross-zone load balancing. Apart from that, we have configured the load balancer to support session persistence and to maintain an idle timeout between the load balancer and the client.

We use RDS Aurora database as a database tier for the application. It is deployed as a cluster with read/write endpoints. Both servers and database instances are secured by strong security group policy to avoid access from an untrusted network source.

AWS CodeCommit is the source code repository for this project. It is a highly available, private repository managed by AWS. An S3 bucket is used for storing the artifacts, which AWS CodePipeline uses to deploy to the different environments.

The CI/CD pipeline is the core component of this project: it builds the code and deploys the changes to the servers. We use AWS CodePipeline for building the CI/CD pipeline.

How to create the infrastructure?

Our infrastructure is created and managed by AWS Cloudformation. Before executing the template, please follow the below instructions to create an infrastructure.

Pre-requisites:

  1. A CodeCommit repository with the source code of the application
  2. An SNS topic with email subscribers
  3. An S3 bucket containing the AWS CloudFormation templates. Create a folder called "templates" inside the bucket and upload the CloudFormation templates into that folder.

Steps:

  1. Log in to the AWS Management Console and select CloudFormation in the Services menu.
  2. Click Create Stack (this is the only option if you have no currently running stacks).
  3. Enter the required input parameters and execute the stack. The order of execution is as follows. The CloudFormation template parses the inputs and the resource section of the parent stack. Initially, it creates a network stack for the infrastructure, which includes the VPC, subnets, route tables, NACLs, internet gateway, NAT gateway, and routing policies. A bastion host is created with an appropriate security group policy. An Elastic Beanstalk application is created for deploying different environments such as dev, staging, and production. An Aurora database cluster is created in the next step for the dev, staging, and production environments; the DB server has its own security group to control inbound access.

    It also has its own parameter group as well as a config group. An Elastic Beanstalk application environment is then created for each environment. Here, our runtime is PHP, and we created a configuration group with the required parameters, such as the load balancer configuration, EC2 auto-scaling configuration, and environment variables for the application environments. The continuous integration and delivery pipeline is created in the last step. It uses CodeCommit as the source and applies the changes to the Elastic Beanstalk environment whenever there is a change in the source code, with manual approval in the staging and production environments. Our template creates the required IAM role for the CodePipeline project.
  4. After a few minutes, the stack is available and we can access the services. Initially, CodePipeline releases the changes to the instances hosted in the Elastic Beanstalk environment.
  5. Access the environment UI and check the application.
  6. Update some changes in the source code; the CI job will be triggered within a minute. It pulls the source code from the CodeCommit repo and waits for manual approval in the staging and prod environments before applying the changes to the servers. Elastic Beanstalk creates new resources, deploys the code in the environment, and then removes the old resources after a successful deployment. This cycle repeats whenever a new version is committed to the repo.
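A minimal sketch of how the parent template nests a child stack (the bucket name, template names, and parameters are placeholders; the full templates are in the repository linked at the end):

Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      # Child template uploaded to the "templates" folder (prerequisite 3).
      TemplateURL: https://s3.amazonaws.com/my-cfn-bucket/templates/network.yml
      Parameters:
        VpcCidr: 10.0.0.0/16
  BeanstalkStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: NetworkStack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-cfn-bucket/templates/beanstalk.yml
      Parameters:
        SubnetIds: !GetAtt NetworkStack.Outputs.PrivateSubnetIds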

CI/CD pipeline for deploying a PHP application hosted in an Elastic Beanstalk environment:

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates. In our case, we have built a CI pipeline using AWS CodeCommit and CodePipeline. It has three stages.

Stage1: Source

When the pipeline is triggered by a change in the configured repo branch, the very first step is to download the source code from the server to the workspace where the next set of actions is performed. Here, we have configured the pipeline to pull the specified repository name and branch. If a single stage has multiple actions, we can set a run order to execute particular actions in sequence.

Stage2: Approve

Some projects might not require a build, in which case we can move straight to the next stage. In our case, it is the approval stage. The project manager can approve the changes to be deployed in the environment or deny them. We use SNS to send a notification to the subscribers to approve the changes. If the action is approved, the pipeline will move to the next stage; otherwise it will be aborted.

Stage3: Deploy

Depending on the approval, the pipeline may or may not reach the deploy stage. During the deploy stage, the code is deployed in all the application environments. Elastic Beanstalk's deployment strategy strongly endorses the blue-green deployment pattern: during deployment, users can still access the older version of the application, and no changes are made to the existing servers. Beanstalk creates a new set of resources and applies the changes to them. After a successful deployment, the latest version of the application becomes accessible to users and the old servers are removed.
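In the CloudFormation template, the three stages of the pipeline might be declared roughly like this (repository, topic, and environment names are placeholders):

Stages:
  - Name: Source
    Actions:
      - Name: CodeCommitSource
        ActionTypeId: { Category: Source, Owner: AWS, Provider: CodeCommit, Version: '1' }
        Configuration: { RepositoryName: my-app, BranchName: master }
        OutputArtifacts: [ { Name: SourceOutput } ]
  - Name: Approve
    Actions:
      - Name: ManualApproval
        ActionTypeId: { Category: Approval, Owner: AWS, Provider: Manual, Version: '1' }
        Configuration: { NotificationArn: !Ref ApprovalTopic }
  - Name: Deploy
    Actions:
      - Name: DeployToBeanstalk
        ActionTypeId: { Category: Deploy, Owner: AWS, Provider: ElasticBeanstalk, Version: '1' }
        Configuration: { ApplicationName: my-app, EnvironmentName: my-app-staging }
        InputArtifacts: [ { Name: SourceOutput } ]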

The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in similar environments to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. Continuous delivery can be fully automated with a workflow process or partially automated with manual steps at critical points.

With continuous deployment, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated.

Source code is available here: https://github.com/rkkrishnaa/cicd-elasticbeanstack

Drupal and Artificial Intelligence for Personalization

We humans as a species want to create a future where AI acts in every aspect of our lives. AI is gaining cognitive abilities similar to humans', and these are being enhanced on a daily basis, solving many challenges that were not possible to solve earlier.

When it comes to enterprise open-source CMSes, “Drupal” is the first name that comes to anyone's mind. Drupal is making inroads into enterprise CMS at a faster pace than anticipated, it is growing at a larger scale, and Drupal 9 will be released in June 2020. In parallel, Artificial Intelligence is one of the technologies creating waves everywhere. The combination of AI and CMS technologies has a lot of potential and is a great way to deliver the benefits of AI to a larger set of people. Drupal + AI will bring a lot of value addition to any organization,

  • Be it in the form of “Deriving Insights”
  • Web personalization
  • Or a combination of the above two.

On Sunday, May 19th, 2019, in association with Valuebound, we hosted a webinar on “Drupal and AI for Personalization”. Keeping in mind the growing popularity of Drupal and AI together, this event was planned and delivered. Many coders and contributors may know CMS but face hindrances when first exposed to Artificial Intelligence, and there may also be folks who are totally new to both AI and CMS (Drupal). This webinar was planned and presented to address all of these groups.

During the first session, I, along with Gokul from Valuebound, tried to capture the essence and need of Drupal and AI. Then we spoke about a few essential steps to kickstart the Drupal + AI journey.

To help the community continue to contribute where Drupal and AI come together, the presentation started with an introduction to the history of AI, followed by its definition and various insights. I also introduced Machine Learning and Deep Learning, covering everything from the basics of neural networks to deep neural networks with explanations and examples.

Note: the presentation is available at https://www.slideshare.net/valuebound/drupal-and-artificial-intelligenc…

Drupal provides a variety of features and has an edge over other CMS solutions available in the market. The following key pointers were discussed in detail:

  • Digital Experience
  • Global community and collaboration
  • Documentation and Web 
  • Innovation and Globalism

With the bold statement “AI can emulate human performance by learning from it”, I started a discussion about Drupal and why now is the time to combine and explore AI/ML and Drupal together. During the session, the following points were addressed, meeting the expectations of an audience from all walks of life:

  • Drupal's chat-bot API
  • Web Personalization and recommendation
  • Support of multilingual platform
  • Deriving insights from the content
  • Meeting dynamic business needs

During this, I covered various AI possibilities around the Drupal CMS with some industry insights, then touched upon a few real-world AI-based use cases across industry sectors and domains.

Questions from the audience included:

1. What is the enterprise-level market share of Drupal in the current scenario?

Drupal is the only player with a significant market share in open-source CMSes for enterprises, covering roughly 24-30% of the enterprise CMS market compared to its competitors.

2. How does Drupal handle multilingual capabilities and support?

Drupal supports multilingual capabilities by exposing various APIs for translation, locale, content translation, and internationalization. Starting from Drupal 7, support for various languages has been taken up on priority and addressed with API exposure.

3. How will AI change the future together with other technologies?

By automating and embedding cognitive capabilities into smart web apps, AI will thrive in providing

  • Better user experience
  • Better targeting 
  • Better value for money

We would love to hear your feedback. Do add your comments below as well as any questions you might have.

Why Aligning Content To Product Marketing Is Important For Manufacturers

The modern-day economy is characterized by quite a few factors, such as increasing market saturation and the presence of cut-throat competition in nearly every niche and industry.

Hence, the need to stand out has led brands to create and publish more content than ever – leading to an overload of information for consumers.

Particularly for manufacturers, the relentless stream of content has made it harder for brands to cut through the noise, and gain the consumers’ attention.

A previous study revealed that consumers are exposed to 5.3 trillion display ads per year – and the number has only risen ever since.

In addition to display ads, the consumer is flooded with information through different channels that marketers use to position and advertise their products.

How Information Overload Impacts Manufacturing Companies

The human brain has evolved to process information, and retain knowledge in a certain way. The speed of technological change is yet to impact the rate at which humans consume information.

In such a data-driven atmosphere, information overload has had a negative impact on businesses around the world.

Stressed Workforce

We are creating far more data than we can put into use – and that’s a well-known fact.

According to Forrester, 60 to 73 percent of all data in an organization goes unused for analytics. Despite the fact that more companies are talking about big data, using technology to gather data, and acknowledging the value of this information – they are unable to get the most out of this data.

However, this has not lowered the expectations placed on employees; losing a critical piece of information, such as a product description, amid so much data can affect the entire organization adversely. Coupled with shrinking response times, information overload has hampered our ability to complete tasks.

For example, research has found that 25% of workers experience significant stress and poor health due to the volume of content that they’re required to process.

If this wasn’t enough, a similar study was conducted across the world where participants from the US, England, Singapore, and Australia described the impact of content overload.

The study reported that a whopping 73 percent of managers stated that their job required them to process a lot of data, and the resulting information overload affected their stress levels.

Overall, it’s quite clear that there is a need for organizations to streamline their stages of content creation and distribution.

One Solution To Your Content Problems: A Content Management System (CMS)

A content management system is one of the best investments that a business can make to solidify their digital presence.

Apart from ensuring great content that works, businesses need to prioritize proper content management to attract their target audience and keep their employees stress-free.

Here are some benefits of using a content management system.

Allows Multiple Users

In a product marketing organization, different people are assigned responsibility for each stage of the content strategy: this includes content creation, publication, deriving insights, and keeping a check on content quality.

Without a proper channel to log in and record your session, it is a continuous struggle for administrators to keep a check on the input provided by content managers.

A CMS not only allows multiple users to access the platform at the same time but also keeps a record of everything that occurs for future reference.

Streamlines Scheduling

Be it product pages, additions to your site or new blogs – a CMS allows you to review updates from your content department in one glance.

Scheduling and a continuous check-and-balance of the overall content strategy are among the most important tasks for a product manager. Without a CMS, this task becomes much more complex.

As product marketing continues to become more integrated, with several communication mediums overlapping one another, streamlined scheduling has become more important than ever. Now, modern product managers need to be aware of the status of all projects in real-time – and this is exactly what a good CMS system allows them to do.

Helps You Manage Content

According to IDC, 71 percent of marketers now create more than 10 times the amount of content they did previously.

The rate at which content is being produced also gives rise to another problem – the pace at which the content is rendered obsolete. As consumers tend to filter through information, they expect to receive data that is currently relevant.

For many product marketing businesses, content management is not just the creation and segregation of different types of digital content – it also includes the ability to remove out of date information.

For example, if you are running a festive promotion for your product (Christmas or Thanksgiving), you need to be prepared to archive the data once the season ends.

Without a CMS in place, this task can take hours’ worth of time as you have to carefully identify and archive all posts about the promotion.

A good CMS has such data grouped in one place, where all menus and links are updated automatically. In other words, the removal of time-sensitive content can be easily done in a few clicks.

You’re The One in Control

To sum it all up, the biggest advantage of a CMS is the absolute control that it lends to product marketing organizations.

Instead of relying on external sources or having a chaotic content feed, a CMS delivers organization, discipline, and uniformity to the process of content creation.

With the right CMS platform, you can update, approve, and deploy content as fast as needed on any scale – without this affecting your performance. In other words, with the help of a CMS, managing content and assets then becomes all about quality, efficiency, and velocity.

The rise of content marketing has been meteoric, to say the least – in fact, modern buyers rely five times more on digital information when making a purchasing decision.

Consider the fact that an average buyer is likely to interact with 10.4 pieces of content before buying a product, and the importance of CMS becomes clear as day.
