Best Practices To Enhance Ad Viewability In Digital Advertising

Getting your ad seen by your target audience shouldn't be a struggle when you are putting your best effort into it. Yet a view is not a click-through or a conversion, so why is it counted as a win when only half the ad is visible for a second or two?

Ad viewability plays a critical role in the performance of your website and business. It is closely linked to revenue, and improving it can deliver a significantly better ROI.

For publishers, agencies, and marketers, viewability has long been a hot topic. A low viewability rating is a sign that you, the publisher, need to reassess and adapt your ad positions.

Let’s cut through the clutter and discuss some of the best practices that can help publishers enhance ad viewability and increase revenue.

Page length

According to Google, short-form content tends to have higher viewability. Shorter content is easy to consume when your pages have only a single fold. You can also enable infinite scroll if you wish to post longer content.

Load speed

Websites filled with ads are naturally associated with high load times. Google suggests brands make sure their webpages load fast, including ad rendering time. Performance optimization is recommended here, as both user experience and viewability depend on how fast pages load.

Alternatively, leverage Google PageSpeed, a speed-optimization tool, to analyze and optimize website performance.

Design responsiveness

Responsive design ensures that ads adapt to the browser and device used to view them. Responsive ads not only provide a better user experience but also enhance viewability, which in turn improves business revenue.

Sidebar content

Responsive templates are a great way to enhance user experience and boost viewability. However, you should know how your sidebar content is presented in each template. In certain scenarios, an ad might not be visible when the content is magnified. Make sure sidebar content remains fully visible even when users zoom in on your page.

Optimize your viewability

Ad placement matters. Placing ads just above the fold has proven to deliver the highest viewability rates. Google suggests placing the ad right above the fold rather than at the very top of the page. Further, the most viewable ad sizes are vertical units such as 160x600.

Produce great content

Content is king. While there are various other recommendations for improving viewability, one of the key components is content. Readers love unique, quality content, and whether an ad is seen ultimately depends on whether a person is willing to invest their time in the content around it.

Measuring your performance

There are any number of practices that help you stay ahead of your competition, and measuring your ads' performance is one of them. It shows you how you are currently doing in terms of viewability. Don't forget to closely track each metric.

Measuring ad performance gives you the opportunity to work on and improve it. Most importantly, there are two dimensions you need to look at: placement and creative sizes.

Bottom line: viewability is a challenge for both buyers and sellers. Optimizing your ads accordingly helps you deliver better results, build long-term relationships and drive more revenue. So don't just take our advice: test, learn and let the data drive your decisions.

Hope you enjoyed reading this list of viewability tips! Be sure to implement and test each one for the best results. If you want our Drupal team to take a look at your website and provide a few personalized insights, contact us.

Understanding npm in Nodejs

I think npm was one of the reasons for the quick adoption of Node.js. As of writing this article there are close to 700,000 packages on npm. If you want more details about packages across different platforms, you can check out http://www.modulecounts.com/. I know it is comparing apples to oranges when comparing packages across different platforms, but at least it should give you some sense of the adoption of Node and JavaScript.

npm package growth

Finding the right node package

Since there are so many packages, we have a problem of plenty. For any given scenario there are multiple packages, and it becomes difficult to identify the right fit for your use case. I generally look up the GitHub repos of popular projects to decide which package to use. This doesn't always scale and needs more work.

So I have stuck to using http://npms.io/ for now. It has better search features and also rates packages based on different parameters. You can read about the rating logic at https://npms.io/about

For example, if you want a Twitter API package, you can search for one, which gives you an output like this:

Do let me know if there is a curated list of node packages or some help groups which help us identify the right packages.

Using additional features of npm

If you are a Node developer, I am pretty sure you have already used npm and are comfortable with the popular commands npm init and npm install. So let us look at a few other handy commands and features.

Since there are more than 700,000 packages on npm, I wanted a simple way to keep track of my favourite packages. There seems to be a way, but it is not very user friendly.

Getting started

Create an account on https://www.npmjs.com/

From the web interface I didn't find any option to star my favorite packages. For now it looks like we will have to make do with the npm CLI.

Login on command line with your credentials.

npm login

Once you hit the command, enter your credentials. Currently it asks for an email id, which is public. I hope npm figures out a way to mask user email ids; I am not comfortable sharing mine.

npm login

Once you are logged in, you can check whether it was successful using the whoami command.

npm whoami

output of whoami

Starring a package

npm star axios

Starring a package

If you want a list of packages you have starred then you can use npm stars

npm stars

The command gives you output like that shown in the image above.

npm list

Most of the packages on npm have dependencies on other libraries, and that is a good thing: it means the packages are modular. For example, if you are using the axios package (https://www.npmjs.com/package/axios), you can check out https://www.npmjs.com/package/axios?activeTab=dependencies to see the packages axios is using. If you want to see the packages that use axios, check out https://www.npmjs.com/package/axios?activeTab=dependents

If you want the complete dependency list, you can use npm list, which gives a tree output like the one below.

npm list tree view

Most of the time this is overwhelming, and the first-level packages should be a good enough check.

npm list --depth=0 2>/dev/null

If you use the above command you will get the list of first level packages in your project.

npm list first level

To go global or not

As a rule of thumb, I try to keep the number of packages I install globally to a minimum. It always makes sense to install packages locally as long as they are related to the project. I only consider installing a package globally if its utility goes beyond a single project. You can run the following command to see the list of your globally installed packages.

npm list -g --depth=0 2>/dev/null

In my case the output is

npm list global packages

As you can see from the list, most of the packages are general purpose and have nothing to do with individual projects. I am not sure why I installed jshint globally; my Atom editor is set up with jshint and I think that should be sufficient. I will spend some time over the weekend figuring out why I did that.

Security Audit

In recent npm versions, any security concerns are displayed when you run the npm install command. But if you want to audit your existing packages, run npm audit

npm audit

This command gives you details of the vulnerabilities in your packages, including the dependency path, so you can judge the potential damage, if any. If you want more details you can check out the Node Security advisory.

You can run a command like npm update fsevents --depth 3 to fix individual vulnerabilities as suggested, or run npm audit fix to fix all the vulnerabilities at once, as I did.

npm audit fix

NPX

Another problem I have faced with installing packages globally is that by the time I run one of them again, a newer version has usually been released, so it doesn't make much sense to install them in the first place. npx comes to your rescue.

To know more about npx, read the following article:

“Introducing npx: an npm package runner” on medium.com

For example, to run mocha on an instance, all you need to do is npx mocha. Isn't that cool? The global packages you saw on my instance are ones I had installed before coming across npx; I haven't installed any packages globally since I started using npx.

Licence crawler

Let us look at one sample use case for npx. While most packages on npm are under the MIT licence, it is better to take a look at the licences of all the packages when you are working on a project for your company.

npx npm-license-crawler

npm licence details

npm, yarn or pnpm

Well, npm is not the only option out there. Yarn and pnpm are popular alternatives. Yarn started as more of a wrapper around npm, built by Facebook to address npm's shortcomings. With competition heating up, npm has been quick to implement features from Yarn. If you are worried about disk space, you can use pnpm. If you want a detailed comparison of the three, check out https://www.voitanos.io/blog/npm-yarn-pnpm-which-package-manager-should-you-use-for-sharepoint-framework-projects

 

Originally published on https://hackernoon.com/understanding-npm-in-nodejs-fca157586c98

Here's Everything to Know about Ad Viewability

There is a lot of buzz around ad viewability nowadays. In the world of digital marketing, where every firm is looking to build its presence online, it has become quite a hot topic. But what exactly does it mean to be “viewable”? Does it mean people will look at your ad? Or is it the silver bullet the industry is looking for?

Let’s solve this puzzle. In this post, we will discuss everything you need to know about ad viewability such as:

  • What is viewability?
  • Why is viewability important?
  • When is an ad impression not viewable?
  • Best practices to enhance viewability to increase views
  • How to measure viewable impressions?

Let’s begin with the basics: what is ad viewability?

Ad viewability is a measure of how visible your ad on a website is to users. In other words, viewability is an online advertising metric that tracks only the impressions actually seen by users.

Why is viewability important?

A high viewability rate indicates a high-quality placement, which can be valuable to advertisers as a Key Performance Indicator (KPI). Viewability also helps marketers calculate the relative success of a campaign, its click-through rate (CTR), by dividing the number of clicks by the number of ads served.

Also, there are other factors that contribute to an ad not being seen, such as users clicking away from a page before the ad loads, or pages being opened by bots or proxy servers rather than live human beings.

When is an ad impression not viewable?

It is often assumed that header spots, placements that appear before the content the user has selected, offer the best viewability. At the same time, an ad placed somewhere in the middle of the page, close to the relevant part of the content, is an attractive alternative: these placements represent a good compromise in the ratio between price and viewability.

For instance, if an ad is loaded below the fold (at the bottom of a webpage) but the reader doesn't scroll down far enough to see it, the impression will not be counted as viewable.

According to Google, the ad should be placed right above the fold rather than at the very top of the page. The most viewable ad sizes are vertical units such as 160x600.

Best practices to enhance viewability to increase views

Much has been said about best practices, but the discussion never quite ends. Publishers talk about diversifying their revenues, and evidently some of them have managed to achieve success, particularly those who have focussed on it for a while.

As a publishing firm, it's always advisable to start with user personas and then design web pages in such a way that ads load with maximum viewability.

Try to design the page so that the ad unit appears “above the fold”, or use a “sticky” ad unit: a type of ad that remains locked in a specific location as the user scrolls.

Also, develop a mobile-friendly website that resizes according to the device on which it is being viewed. Responsive themes ensure a good user experience.

Another key consideration for ad viewability is speed. Sites laden with ads from multiple ad networks can take a long time to load. Applying techniques that speed up ad delivery can greatly improve ad viewability.

These are some of the best practices that can help you enhance ad viewability and increase revenue.

Digiday quoted Jim Norton, the former chief business officer at Condé Nast, as saying: “No single stream of alternative revenue will make up for the declines that we’re seeing in advertising.”

How to measure viewable impressions?

According to the Interactive Advertising Bureau (IAB), an ad that has at least 50 percent of its area on screen for one second or longer (two seconds or longer for video ads) counts as a viewable impression.
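Under that definition, a viewability check reduces to computing the on-screen fraction of the ad's rectangle and how long it stayed visible. The sketch below illustrates the arithmetic; the function and parameter names are purely illustrative, not part of any ad SDK:

```javascript
// Fraction of an ad rectangle currently inside the viewport.
// rect and viewport are { top, left, bottom, right } in page coordinates.
function visibleFraction(rect, viewport) {
  const w = Math.max(0, Math.min(rect.right, viewport.right) - Math.max(rect.left, viewport.left));
  const h = Math.max(0, Math.min(rect.bottom, viewport.bottom) - Math.max(rect.top, viewport.top));
  const area = (rect.right - rect.left) * (rect.bottom - rect.top);
  return area > 0 ? (w * h) / area : 0;
}

// IAB-style check: >= 50% of pixels in view, for >= 1s (>= 2s for video).
function isViewableImpression(rect, viewport, visibleMs, isVideo = false) {
  const minMs = isVideo ? 2000 : 1000;
  return visibleFraction(rect, viewport) >= 0.5 && visibleMs >= minMs;
}
```

In a real page you would feed this from an IntersectionObserver or from getBoundingClientRect plus a timer; measurement vendors do essentially this bookkeeping for you.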

Bottom line: buying a “viewable” ad impression does not guarantee that it will be seen or clicked on. However, there are several ways you can improve the chances of your ad being viewed. It's also important to understand that online ad success cannot be determined by views and clicks alone; you need to consider the entire buyer journey.

If you are a publishing firm looking for experts to integrate the needs of a publishing platform with Drupal, we can help. Get in touch.

Introduction to Behavior Driven Development

With advances in technology, automation is playing a key role in software development processes, as it enables teams to verify regression and functional behaviour and run tests simultaneously in the most efficient way. Technology never stands still: web platforms like Drupal and other frameworks continuously enhance their tools to capture the market's attention. Automation is a particularly good fit for web-based applications, as verifying and testing application interfaces is comparatively easier than it was for earlier web applications.

Before we talk about Behaviour Driven Development (BDD), let's look at why automated tests are worthwhile:

  • Improves speed
  • Better test coverage
  • Better efficiency
  • Boosts developer and tester morale
  • Requires fewer human resources
  • Cost efficient

What is BDD?

BDD is a methodology for developing software through example-based communication between developers, QA, project managers and the business team.

The primary goal of BDD is to improve communication with the business team by ensuring that functional requirements are understood by all members of the development team, avoiding ambiguity in the requirements. The methodology helps deliver software through continuous communication, deliberate discovery and test automation.

Why should we follow BDD methodology?

BDD is an extension of TDD (Test Driven Development). As in TDD, in BDD we write tests first and then add application code, but the tests are easy to describe using a ubiquitous language.
Further, BDD follows an example-based communication process between the development team, QA and business clients.

Example Based Communication

This helps the business team and developers clearly understand the client's requirements. BDD is largely facilitated through the use of a domain-specific language built from natural-language constructs.

Gherkin

Gherkin is the language in which behavior is defined: features are written in Gherkin. Behat is a tool for testing the behavior of your application, described in this special language designed specifically for behavior descriptions.

Gherkin Structure

Behat is a BDD (Behavior Driven Development) framework for PHP for automated testing of your business expectations. Behat runs the test cases written in the Gherkin structure.

Example structure of Feature File:

Feature structure
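As an illustration (the feature title, steps and form values here are hypothetical, standing in for the screenshot above), a login feature written in Gherkin might look like this:

```gherkin
Feature: User login
  In order to access my account
  As a registered user
  I need to be able to log in to the site

  Scenario: Logging in with valid credentials
    Given I am on "/user/login"
    When I fill in "Username" with "demo_user"
    And I fill in "Password" with "demo_password"
    And I press "Log in"
    Then I should see "Welcome"
```

Steps like "I am on" and "I press" come predefined with MinkContext, so a scenario like this runs without writing any step-definition code.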

Gherkin keywords and their descriptions are as follows:
Feature: Starts the feature and gives it a title; this is a descriptive section of what is desired. Behat doesn't parse the next three lines of text, which give context to the people reading your feature.
Scenario: Starts the scenario and contains a description of a determinable business situation.
Steps: A scenario consists of steps known as Givens, Whens and Thens. Behat doesn't technically differentiate between these three kinds of steps.
A feature file can contain a single scenario to test the behavior of your application, or multiple scenarios.
Given: Defines the initial state of the system for the scenario.
When: Describes the action taken by the person/role.
Then: Describes the observable system state after the action has been performed.
And/But: Can be added to create multiple Given/When/Then lines.

Prerequisites:

PHP version higher than 5.3.5
The curl, mbstring and xml extensions should be installed. (Behat itself is a library, so it can be installed easily using Composer.)

Installing Behat

Step 1: Run the following command in a terminal; it creates a composer.json file and installs Behat. You can install Behat in any path you want... let's say the root folder of your project.

composer require behat/mink-extension behat/mink-goutte-driver behat/mink-selenium2-driver

If you want to test JavaScript behaviour with Selenium, install the selenium2-driver as well; otherwise it is not required.

Composer

To start using Behat in your project, call vendor/bin/behat --init. This will set up a features directory in your Behat directory.

Step 2: Open the FeatureContext.php file under /behat/features/bootstrap (created when you ran the init command) and add the code snippet below.

use Behat\Behat\Context\SnippetAcceptingContext; 
use Behat\MinkExtension\Context\MinkContext; 

Step 3: Extend your FeatureContext class from MinkContext and implement SnippetAcceptingContext and Context.

Step 4: Now create a config file called behat.yml in the behat directory.

In behat.yml we specify the base URL of the instance that Behat should test.

behat.yml
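A minimal behat.yml along these lines might look like the following sketch (the base_url is a placeholder for your own instance; goutte drives plain scenarios headlessly, while the selenium2 session handles @javascript scenarios):

```yaml
default:
  extensions:
    Behat\MinkExtension:
      base_url: "http://localhost"            # instance under test (placeholder)
      browser_name: firefox                   # browser used for JS scenarios
      sessions:
        default:
          goutte: ~                           # headless driver for plain scenarios
        javascript:
          selenium2:
            wd_host: "http://127.0.0.1:4444/wd/hub"
```

Adjust the keys to match the MinkExtension version you installed; older releases use a flatter layout with default_session and javascript_session keys instead of the sessions block.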


The Goutte driver acts as a bridge between Behat and your business application.

wd_host is simply the WebDriver endpoint on localhost, so it can be 127.0.0.1:4444/wd/hub for Selenium integration with Behat.

Note: If you want to test with the Selenium integration described here, you should downgrade your Firefox to version 46. The Selenium standalone server version should be 2.52.0 and your Firefox driver should be geckodriver version 0.17.0. Just download the zip file; that is enough to start the Selenium server.

Currently, the Selenium integration works successfully with Firefox version 46 and the appropriate Firefox driver. If you want to test with Firefox, set browser_name: firefox in the behat.yml file.

In the feature file, we should add the @javascript tag before the scenario starts; only then will Behat use the Selenium server for browser testing.

Starting Selenium server

Start the Selenium server for JavaScript testing:

java -Dwebdriver.gecko.driver="path/to/geckodriver" -jar selenium-server-standalone-2.52.0.jar

(or)

java -jar selenium-server-standalone-2.44.0.jar

Don’t forget to specify @javascript in feature file to run Selenium testing.

Create feature file under /behat/features/login.feature

Home feature

Once you are done with features, scenarios, and steps, run them from the terminal in the directory where Behat is installed: vendor/bin/behat will run all feature scripts.

You can also run a single feature file, e.g. vendor/bin/behat features/file_name.feature.

If you want to run Selenium JavaScript testing with deliberate pauses for monitoring, you can add a step like “And I wait for 2”.
The “iWaitFor” function is simply the step definition behind “And I wait for 2”; the number is specified in seconds.

Similarly, here is an example of triggering the “Enter” keyboard button.

vendor/bin/behat -dl: this command shows the full list of step definitions available to Behat.

Sample output:

sample output

We have covered a detailed description of behavior-driven development, followed by example-based communication between teams, QA, and business clients. We also touched on the Gherkin structure, usage of the Behat tool and the installation procedure. This should have given you an overall idea of JavaScript automation testing with Behat. Feel free to share your experiences and any issues you come across.

Below is a presentation on "Behavior Driven Development".

4 ways publishing companies can increase their site’s ARPU

Media and publishing companies typically run their business on segmented revenue streams, like advertising, promotions and so on, with each stream reported vertically to drive a steep rise in global annual turnover. But could this be the right time for publishers to focus their ARPU on paid subscriptions and memberships?

Major content players like Netflix, Amazon and Spotify have leveraged the subscription model and proved that users are willing to pay for quality content. Perhaps it's high time the publishing industry focused more on fixing its business model and less on ways to further enhance content production.

Leading publishers like The New York Times, The Wall Street Journal, the Financial Times and The New Yorker are increasingly choosing the subscription path.

The New York Times, in its 2020 report titled Journalism That Stands Apart, said:

We are, in the simplest terms, a subscription-first business. Our focus on subscribers sets us apart in crucial ways from many other media organizations. We are not trying to maximize clicks and sell low-margin advertising against them. We are not trying to win a pageviews arms race. We believe that the more sound business strategy for The Times is to provide journalism so strong that several million people around the world are willing to pay for it.

This raises the question: how are these organizations increasing their ARPU with paid subscriptions, memberships, events, and lead generation?

First, publishers need to work on their content strategy and user experience to boost audience engagement. At the same time, they need to consider their audience base and figure out their goal. They can also ask themselves: what unique value proposition can we offer our clients, and how do we ensure they are aware of it?

Of course, introducing a premium model should be the first step, but more needs to be done if the transition is to be successful. Let's look at four key areas publishing houses can work on.

  • Paid Subscription: Publishers are turning directly to site visitors for paid subscriptions rather than relying on advertising. Typically, a subscription model requires users to pay money to get access to a product or service. It works best when the company provides highly specialized and unique information that readers can’t find anywhere else.
  • Membership: A membership is about the notion of belonging. It says nothing of cost or price, though most memberships end up having a cost to them. Membership programs are exclusive and carry tremendous benefits: being a valued member gets you access to other members, which may be the thing that is most valued.
  • Events: Hosting events to diversify revenue streams is nothing new. Often event organizers combine their resources, showcasing the magazine’s content as part of a unique offering. A planned occasion not only provides an additional revenue stream but also increases the subscriber base. According to the American Press Institute, “Incorporating an events strategy with publishing also strengthens a publisher’s brand and bottom line while deepening connections with audiences, sponsors, and advertisers.”
  • Lead Generation: It typically starts with data collection: pushing prospects to landing pages and asking them to fill in lead-gen forms to get free ebooks, whitepapers, and other resources. This data is used to better understand leads, engage with them and enhance lead-gen programs. Here, email marketing helps publishers reach their target audience.

Further, publishers can utilize proven online marketing methods, such as webinars, co-registration, affiliate partnerships, search engine optimization and others.

Are you interested in adding a subscription feature to your own website to provide services to your customers? Not only do we help you keep track of customers and their subscriptions, we also handle end-to-end Drupal management services. Get in touch with our experts to find out how you can use the subscription model to increase your ARPU.
 

4 ways publishing enterprises can increase their website’s ARPU

Media and publishing companies typically run their business based on segmented revenue streams, like advertising, promotions etc. Here revenue stream often reports vertically to ensure the steep rise in global annual turnover. But could it be the right time for the publisher to focus on paid subscription and memberships?

Thanks to major content players like Netflix, Amazon, Spotify and the other who have leveraged subscription model and proved users are willing to pay for the quality content. Perhaps, it’s high time when the publishing industry should focus more on fixing their business model and less on ways to further enhance the content production. 

Leading publishers, like The New York Times, Wall Street Journal, Financial Times, The New Yorker are increasingly choosing the subscription path. 

The New York Times in its 2020 report, titled Journalism That Stands Apart, said: 

We are, in the simplest terms, a subscription-first business. Our focus on subscribers sets us apart in crucial ways from many other media organizations. We are not trying to maximize clicks and sell low-margin advertising against them. We are not trying to win a pageviews arms race. We believe that the more sound business strategy for The Times is to provide journalism so strong that several million people around the world are willing to pay for it.

With this, the question arises - how these organizations are increasing their ARPU with paid subscriptions, memberships, events, and lead generation.

First, publishers need to work on their content strategy and user experience to boost audience engagement. Simultaneously, they need to consider their audience base and figure out their goal. They can also think about - what is the unique value proposition you can offer to your client? And how to ensure they are aware of it?

Of course, introducing a premium model should be the first step here, but more needs to be done if the transition is to be successful. Let’s have a look at four key areas publishing houses can work on.

  • Paid Subscription: Publishers are turning directly to site viewers for paid subscriptions rather than relying on advertising. Typically, a subscription model requires users to pay money to get access to a product or service. It works best when the company provides highly specialized and unique information that readers can’t find anywhere else.
  • Membership: A membership is the notion of belonging. It says nothing about cost or price, though most memberships end up having one. Membership programs are exclusive and carry tremendous benefits. Being a valued member gets you access to other members, which may be the thing that is most valued.
  • Events: Hosting events to diversify revenue streams is nothing new. Often event organizers combine their resources, showcasing the magazine’s content as part of a unique offering. A planned occasion not only provides an additional revenue stream but also grows the subscriber base. According to the American Press Institute, “Incorporating an events strategy with publishing also strengthens a publisher’s brand and bottom line while deepening connections with audiences, sponsors, and advertisers.” 
  • Lead Generation: It typically starts with data collection: pushing prospects to landing pages and asking them to fill in lead-gen forms to get free ebooks, whitepapers, and other resources. This data is used to better understand leads, engage with them, and refine lead-gen programs. Here, email marketing helps publishers reach their target audience. 

Further, publishers can utilize proven online marketing methods, such as webinars, co-registration, affiliate partnerships, search engine optimization and others.

 

Integrating Drupal 8 REST API with Highstock

Having a hard time finding a JavaScript library that can display stock and timeline charts on your web or mobile application? Recently, I was working on a Drupal project where the client’s requirement was to add such a feature to their web application. During our secondary research, the team came across Highstock, a JavaScript library that lets you create general timeline charts and embed them on a website.

So what exactly is Highstock?

Highstock displays stock and timeline charts on web/mobile applications based on the data you feed it. A Highstock chart offers a wide range of features, such as a basic navigator series, date ranges, a date picker, and a scrolling bar. Still wondering how to use this feature to its fullest? Integrate the Drupal 8 REST API with the Highstock JavaScript library.

Integrating the Drupal 8 REST API with the Highstock JavaScript library

Step 1: Create a custom module. In my case, I will be creating a module named highstock.

Step 2: Create a highstock.info.yml file.
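A minimal highstock.info.yml might look like the following sketch; the description text and package name are my own wording:

```yaml
name: Highstock
type: module
description: 'Displays stock/timeline charts via the Highstock library.'
core: 8.x
package: Custom
```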

Step 3: Create highstock.libraries.yml file to add highstock library.
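Here is a sketch of highstock.libraries.yml, assuming the library is loaded from the official Highcharts CDN and our own highstock_chart.js sits in a js/ folder inside the module:

```yaml
highstock:
  version: 1.x
  js:
    https://code.highcharts.com/stock/highstock.js: { type: external, minified: true }
    js/highstock_chart.js: {}
  dependencies:
    - core/jquery
    - core/drupal
```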

Step 4: Create Rest API Resource, which provides the input for the chart.

Highstock accepts input in the following format: an array of [x, y] pairs, where each pair holds the x-axis and y-axis values separated by a comma. So while creating the REST API, we need to generate output in the following format:

[
[1297987200000,204011724],
[1298332800000,218135561],
[1298419200000,167962942],
[1298505600000,124974514],
[1298592000000,95004483],
[1298851200000,100768479]
]
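If your source data is not already in this shape, a small helper can build it. This is just a sketch: the function name and the row fields `date` and `value` are assumptions, not part of any real API here.

```javascript
// Convert rows like { date: "2011-02-18T00:00:00.000Z", value: "204011724" }
// into the [[timestamp, value], ...] array that Highstock expects.
function toHighstockSeries(rows) {
  return rows.map(function (row) {
    // Date.parse yields milliseconds since the epoch, which is what
    // Highstock uses for the x-axis.
    return [Date.parse(row.date), Number(row.value)];
  });
}
```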

Step 4.1: In Drupal 8, create HighstockChart.php file inside /src/Plugin/rest/resource.
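A skeletal HighstockChart.php could look like the sketch below. The route path and the hard-coded sample data are placeholders; a real resource would build the array from entity or external-source queries:

```php
<?php

namespace Drupal\highstock\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * Provides chart data in the [[timestamp, value], ...] shape Highstock expects.
 *
 * @RestResource(
 *   id = "highstock_chart",
 *   label = @Translation("Highstock chart data"),
 *   uri_paths = {
 *     "canonical" = "/api/highstock-chart"
 *   }
 * )
 */
class HighstockChart extends ResourceBase {

  /**
   * Responds to GET requests with the series data for the chart.
   */
  public function get() {
    // Placeholder data in the format shown above.
    $series = [
      [1297987200000, 204011724],
      [1298332800000, 218135561],
    ];
    return new ResourceResponse($series);
  }

}
```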

Step 5: Create a highstock_chart.js file to integrate REST API output with a highstock library.

The Highstock library provides various chart types, such as single line series, line with markers and shadow, spline, step line, and area spline. You can find the chart types here: https://www.highcharts.com/stock/demo.

In the JS, we call the API, which returns the JSON output. The chart is rendered based on the type configured for it.
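Here is a rough sketch of what highstock_chart.js could contain, using a Drupal behavior. The endpoint path /api/highstock-chart and the container id highstock-chart are assumptions; they must match your REST resource configuration and the markup rendered by the block:

```javascript
(function ($, Drupal) {
  "use strict";

  // Fetch the REST output and hand it to Highstock.
  Drupal.behaviors.highstockChart = {
    attach: function (context) {
      $.getJSON("/api/highstock-chart?_format=json", function (data) {
        Highcharts.stockChart("highstock-chart", {
          rangeSelector: { selected: 1 },
          title: { text: "Highstock Chart" },
          series: [{ name: "Value", data: data }]
        });
      });
    }
  };
})(jQuery, Drupal);
```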

Step 6: Create a block HighstockChartBlock.php to show the chart.

Place the above block in any of your desired regions and it will display a chart like the one below:

Highstock Chart

Default JS provides the following properties:

  • Range selector
  • Date range
  • Scrollbar at the bottom
  • A menu icon on the right, which provides options to print the chart or download it in PNG, JPEG, SVG, or PDF format
  • Markers on mouse hover, with the x-axis and y-axis values highlighted

 

Properties of Highstock Chart:

The Highstock JavaScript library provides several properties and methods to configure the chart. All these configurations can be found in the Highstock API reference: https://api.highcharts.com/highstock/.

We can modify charts using those properties. I referred to the above link and configured my chart as mentioned below:

We must add the above properties in the highstock_chart.js of your custom module. After applying all the properties, the chart will look similar to the image below.

Final Chart

 

This API is very handy when it comes to presenting complex data structures to end users in the form of colorful charts. You should definitely pitch it to clients who are still showing data in traditional tables, Excel sheets, etc. I hope you can now easily integrate the Drupal 8 REST API with Highstock. If you have any suggestions or queries, please leave a comment and I will try to answer.

Understanding async-await in Javascript

Async and await are extensions of promises, so if you are not clear about the basics of promises, please get comfortable with them before reading further. You can read my post on Understanding Promises in Javascript.

I am sure many of you are using async and await already, but I think they deserve a little more attention. Here is a small test: if you can’t spot the problem with the code below, then read on.

for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("userDetailsJSON", userDetailsJSON);
}

We will revisit the above code block later, once we have gone through the async await basics. As always, the Mozilla docs are your friend. Especially check out the definitions.

async and await

From MDN

An asynchronous function is a function which operates asynchronously via the event loop, using an implicit Promise to return its result. But the syntax and structure of your code using async functions is much more like using standard synchronous functions.

I wonder who writes these descriptions; they are so concise and well articulated. To break it down:

  1. The function operates asynchronously via the event loop.
  2. It uses an implicit Promise to return the result.
  3. The syntax and structure of the code are similar to writing synchronous functions.

And MDN goes on to say

An async function can contain an await expression that pauses the execution of the async function and waits for the passed Promise's resolution, and then resumes the async function's execution and returns the resolved value. Remember, the await keyword is only valid inside async functions.

Let us jump into code to understand this better. We will reuse the three functions we used for understanding promises here as well.

A function that returns a promise which resolves or rejects after n number of seconds.

var promiseTRRARNOSG = (promiseThatResolvesRandomlyAfterRandomNumnberOfSecondsGenerator = function() {
  return new Promise(function(resolve, reject) {
    let randomNumberOfSeconds = getRandomNumber(2, 10);
    setTimeout(function() {
      let randomiseResolving = getRandomNumber(1, 10);
      if (randomiseResolving > 5) {
        resolve({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      } else {
        reject({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      }
    }, randomNumberOfSeconds * 1000);
  });
});
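The snippet above calls a getRandomNumber helper that is not defined in this post; a minimal version, assuming it returns a random integer between min and max inclusive, might look like:

```javascript
// Returns a random integer in the range [min, max], inclusive.
function getRandomNumber(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
```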

Two more deterministic functions one which resolve after n seconds and another which rejects after n seconds.

var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve({
        resolvedAfterNSeconds: n
      });
    }, n * 1000);
  });
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject({
        rejectedAfterNSeconds: n
      });
    }, n * 1000);
  });
});

Since all three of these functions return promises, we can also call them asynchronous functions. See, we wrote async functions even before knowing about them.

If we had to use the function promiseTRSANSG in the standard promise format, we would have written something like this:

var promise1 = promiseTRSANSG(3);
promise1.then(function(result) {
  console.log(result);
});
promise1.catch(function(reason) {
  console.log(reason);
});

There is a lot of unnecessary code here, like the anonymous functions used just for assigning the handlers. What async await does is improve the syntax, making the code look more synchronous. If we did the same as above in async await format, it would be:

result = await promiseTRSANSG(3);
console.log(result);

Well, that looks much more readable than the standard promise syntax. When we used await, the execution of the code was blocked; that is why you had the value of the promise resolution in the variable result. As you can make out from the above code sample, when you use await the result is assigned to the variable directly instead of going through the .then part. You can also see that the .catch part is not present here; that is because rejections are handled using try catch error handling. So instead of promiseTRSANSG, let us use promiseTRRARNOSG. Since this function can either resolve or reject, we need to handle both scenarios. In the above code, we wrote just two lines to give you an easy comparison between the standard format and the async await format. The example in the next section gives you a better idea of the format and structure.

General syntax of using async await

async function testAsync() {
  for (var i = 0; i < 5; i++) {
    try {
      result1 = await promiseTRRARNOSG();
      console.log("Result 1 ", result1);
      result2 = await promiseTRRARNOSG();
      console.log("Result 2 ", result2);
    } catch (e) {
      console.log("Error", e);
    } finally {
      console.log("This is done");
    }
  }
}
testAsync();

From the above code example, you can see that instead of the promise-specific error handling, we are using the more generic try catch approach. That is one thing less for us to remember, and it also improves overall readability, even after considering the try catch block around our code. So, based on the level of error handling you need, you can add any number of catch blocks and make the error messages more specific and meaningful.

Pitfalls of using async and await

async await makes it much easier to use promises. Developers from a synchronous programming background will feel at home while using async and await. This should also alert us: it means we are moving towards a more synchronous approach if we don’t keep a watch.

The whole point of javascript/nodejs is to think asynchronously by default, not as an afterthought. async await generally means you are doing things in a sequential way. So make a conscious decision whenever you want to use async await.

Now let us start analysing the code that I flashed at you in the beginning.

for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("userDetailsJSON", userDetailsJSON);
}

This seems like a harmless piece of code that fetches the GitHub details of three users: “nkgokul”, “BrendanEich”, and “gaearon”. Right? That is true; that is what this code does. But it also has some unintended consequences.

Before diving further into the code let us build a simple timer.

startTime = performance.now();  //Run at the beginning of the code
function executingAt() {
  return (performance.now() - startTime) / 1000;
}

Now we can use executingAt wherever we want to print the number of seconds that have elapsed since the beginning.

async function fetchUserDetailsWithStats() {
  i = 0;
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    i++;
    console.log("Starting API call " + i + " at " + executingAt());
    userDetails = await fetch("https://api.github.com/users/" + name);
    userDetailsJSON = await userDetails.json();
    console.log("Finished API call " + i + " at " + executingAt());
    console.log("userDetailsJSON", userDetailsJSON);
  }
}

Checkout the output of the same.

async-await analysed

As you can see from the output, each await call runs only after the previous one has completed. We are fetching the details of three different users: “nkgokul”, “BrendanEich”, and “gaearon”. It is pretty obvious that the output of one API call is in no way dependent on the output of the others.

The only dependency we have is between these two lines of code:

userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();

We can create the userDetailsJSON object only after getting userDetails. Hence it makes sense to use await here, within the scope of fetching a single user’s details. So let us make an async function for getting the details of a single user.

async function fetchSingleUsersDetailsWithStats(name) {
  console.log("Starting API call for " + name + " at " + executingAt());
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("Finished API call for " + name + " at " + executingAt());
  return userDetailsJSON;
}

Now that the fetchSingleUsersDetailsWithStats is async we can use this function to fetch the details of the different users in parallel.

async function fetchAllUsersDetailsParallelyWithStats() {
  let singleUsersDetailsPromises = [];
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    let promise = fetchSingleUsersDetailsWithStats(name);
    console.log(
      "Created Promise for API call of " + name + " at " + executingAt()
    );
    singleUsersDetailsPromises.push(promise);
  }
  console.log("Finished adding all promises at " + executingAt());
  let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
  console.log("Got the results for all promises at " + executingAt());
  console.log(allUsersDetails);
}

When you want to run things in parallel, the rule of thumb I follow is:

Create a promise for each async call. Add all the promises to an array. Then pass the promises array to Promise.all, which in turn returns a single promise that we can await.

When we put all of this together we get

startTime = performance.now();
async function fetchAllUsersDetailsParallelyWithStats() {
  let singleUsersDetailsPromises = [];
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    let promise = fetchSingleUsersDetailsWithStats(name);
    console.log(
      "Created Promise for API call of " + name + " at " + executingAt()
    );
    singleUsersDetailsPromises.push(promise);
  }
  console.log("Finished adding all promises at " + executingAt());
  let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
  console.log("Got the results for all promises at " + executingAt());
  console.log(allUsersDetails);
}
async function fetchSingleUsersDetailsWithStats(name) {
  console.log("Starting API call for " + name + " at " + executingAt());
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("Finished API call for " + name + " at " + executingAt());
  return userDetailsJSON;
}
fetchAllUsersDetailsParallelyWithStats();

The output for this is

Promises run in parallel with timestamps

As you can make out from the output, promise creation is almost instantaneous, whereas the API calls take some time. We need to stress this: the time taken for promise creation and processing is trivial compared to IO operations. So while choosing a promise library, it makes more sense to choose one that is feature rich and has a better developer experience. Since we are using Promise.all, all the API calls run in parallel. Each API call takes almost 0.88 seconds, but since they are made in parallel, we got the results of all the calls in 0.89 seconds.

In most scenarios, understanding this much should serve us well; you can skip to the Thumb Rules section. But if you want to dig deeper, read on.

Digging deeper into await

For this, let us limit ourselves to the promiseTRSANSG function. Its outcome is more deterministic and will help us identify the differences.

Sequential Execution

startTime = performance.now();
var sequential = async function() {
  console.log(executingAt());
  const resolveAfter3seconds = await promiseTRSANSG(3);
  console.log("resolveAfter3seconds", resolveAfter3seconds);
  console.log(executingAt());
  const resolveAfter4seconds = await promiseTRSANSG(4);
  console.log("resolveAfter4seconds", resolveAfter4seconds);
  end = executingAt();
  console.log(end);
}
sequential();

Sequential Execution

Parallel Execution using Promise.all

var parallel = async function() {
  startTime = performance.now();
  promisesArray = [];
  console.log(executingAt());
  promisesArray.push(promiseTRSANSG(3));
  promisesArray.push(promiseTRSANSG(4));
  result = await Promise.all(promisesArray);
  console.log(result);
  console.log(executingAt());
}
parallel();

Parallel execution using promises

Concurrent Start of Execution

Asynchronous execution starts as soon as the promise is created. await just blocks the code within the async function until the promise is resolved. Let us create a function that will help us understand this clearly.

var concurrent = async function() {
  startTime = performance.now();
  const resolveAfter3seconds = promiseTRSANSG(3);
  console.log("Promise for resolveAfter3seconds created at ", executingAt());
  const resolveAfter4seconds = promiseTRSANSG(4);
  console.log("Promise for resolveAfter4seconds created at ", executingAt());
  resolveAfter3seconds.then(function() {
    console.log("resolveAfter3seconds resolved at ", executingAt());
  });
  resolveAfter4seconds.then(function() {
    console.log("resolveAfter4seconds resolved at ", executingAt());
  });
  console.log(await resolveAfter4seconds);
  console.log("await resolveAfter4seconds executed at ", executingAt());
  console.log(await resolveAfter3seconds); 
  console.log("await resolveAfter3seconds executed at ", executingAt());
};
concurrent();

Concurrent start and then await

From the previous post we know that .then is event driven; that is, .then executes as soon as the promise is resolved. So let us use resolveAfter3seconds.then and resolveAfter4seconds.then to identify when our promises are actually resolved. From the output we can see that resolveAfter3seconds resolves after 3 seconds and resolveAfter4seconds after 4 seconds. This is as expected.

Now, to check how await affects the execution of the code, we have used:

console.log(await resolveAfter4seconds);
console.log(await resolveAfter3seconds);

As we have seen from the .then output, resolveAfter3seconds resolved one second before resolveAfter4seconds. But we await resolveAfter4seconds first, followed by the await for resolveAfter3seconds.

From the output we can see that though resolveAfter3seconds was already resolved, it got printed only after the output of console.log(await resolveAfter4seconds);. This reiterates what we said earlier: await only blocks the execution of the next lines of code in the async function and doesn’t affect the promise execution.

Disclaimer

The MDN documentation mentions that Promise.all is still serial and that using .then is truly parallel. I have not been able to understand the difference and would love to hear back if anybody has wrapped their head around it.

Thumb Rules

Here is a list of thumb rules I use to keep my head sane when using async and await:

  1. async functions return a promise.
  2. async functions use an implicit Promise to return their result. Even if you don’t return a promise explicitly, the async function makes sure your code is passed through a promise.
  3. await blocks the execution of the code within the async function of which it is a part.
  4. There can be multiple await statements within a single async function.
  5. When using async await, make sure to use try catch for error handling.
  6. If your code contains blocking code, it is better to make it an async function. By doing this you make sure that somebody else can use your function asynchronously.
  7. By making async functions out of blocking code, you enable the user who calls your function to decide on the level of asynchronicity they want.
  8. Be extra careful when using await within loops and iterators. You might fall into the trap of writing sequentially executing code when it could easily have been done in parallel.
  9. await is always for a single promise. If you want to await multiple promises (run them in parallel), create an array of promises and then pass it to the Promise.all function.
  10. Promise creation starts the execution of the asynchronous functionality.
  11. await only blocks the code execution within the async function. It only makes sure that the next line executes when the promise resolves. So if an asynchronous activity has already started, await will not have any effect on it.

Please point out if I am missing something here or if something can be improved.

Originally published on https://hackernoon.com/understanding-async-await-in-javascript-1d81bb07…

6 Best Practices To Safeguard Your Drupal 8 Website

The last few months have been quite challenging for media & publishing enterprises, dealing with the EU’s new data privacy law, GDPR, and Drupal’s highly critical vulnerability, DrupalGeddon 2.  

On 28 March, Drupal announced alerts about DrupalGeddon 2 (SA-CORE-2018-002 / CVE-2018-7600), which was later patched by the security team. The vulnerability was serious enough to affect the vast majority of Drupal 6, 7, and 8 websites. 

Earlier, in October 2014, Drupal faced a similar vulnerability, tagged as DrupalGeddon. At that time, automated attacks began hitting unpatched sites within hours of the critical security release. 

So the question here is: how vulnerable is Drupal?

Just like with any other major framework out there, security risks exist for Drupal as well. However, Drupal is a more secure platform when compared to its peers. Learn more about “safety concerns in an e-commerce site and how Drupal is addressing it”.

In short, we can’t specify exactly how vulnerable Drupal is, as it depends entirely on the context. You will possibly find the answer to this question in one of our previous posts, where we talked about “Drupal Security Advisor Data”.

Implement these measures to secure your Drupal website

1. Upgrade to the latest version of Drupal

Whether it is your operating system, antivirus or Drupal itself, running the latest version is always suggested. And this is the least you can and should do to protect your website. 

The updates not only bring new features but also enhance security. Further, you should keep your modules updated, as they are most often the cause of misery. It is always recommended to check the update report and update at regular intervals. The latest version is Drupal 8.3.1.
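If you manage the site with Drush, this check can be scripted. Treat the following as a sketch: exact command names vary between Drush versions.

```
# List modules (including core) with pending security updates (Drush 9+ syntax).
drush pm:security

# After updating the code, apply database updates and rebuild caches.
drush updb -y
drush cr
```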

Note that hackers usually target older versions of a CMS, as they are more vulnerable.

2. Remove unnecessary modules

Agreed, modules play a critical role in enhancing the user experience. However, you should be wary of what you download, as every module increases the attack surface. Also, ensure that a module has a sizable number of downloads. 

That way, even if a vulnerability does occur, it will be resolved quickly by the community, since it can affect a major chunk of companies and individuals. Furthermore, you can disable unused modules or uninstall them completely.

3. Practice strong user management

In a typical organization, several individuals require access to the website to manage different areas within it. Each of these users is a potential source of a security breach, so it is important to keep control of their permissions. 

Give users limited access to the site instead of giving access to the whole site by default. And when a user leaves the organization, they should be promptly removed from the administrator list to eliminate any unnecessary risk. Read on for a quick review of “managing user roles & permission in Drupal 8”.

4. Choose a proper hosting provider

It's always a dilemma to figure out which hosting provider to trust with your website. Needless to say, the hosting provider plays a key role in ensuring the security of the website. Look for one that offers a security-first Drupal hosting solution with all the server-side security measures, like SSL.

5. Enable HTTPS

As a core member of the development team, a business owner, or a decision maker, it is your responsibility to take ownership of the security of your enterprise website. Serving every page over HTTPS encrypts the traffic between your users and your server, protecting credentials and session cookies in transit, so obtain an SSL/TLS certificate and redirect all HTTP traffic to HTTPS.

Also consider performing a check for common vulnerabilities at regular intervals, as it will allow you to make quick work of those holes by following the prompts. Here is what Drupal experts have to say about "securing users' private data from unauthorized access".

6. Backup regularly

Plan for the worst. Keep your codebase and database handy. There are a number of reasons, both accidental and intentional, that can destroy your hard work. Here is a list of reasons why you should regularly back up your website: 

  • General safety
  • The original version of your site has aged
  • Respond quickly if your site is hacked
  • Updates went wrong 
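On the command line, a basic backup can be as simple as the following sketch; the paths are placeholders for your own setup:

```
# Dump the database (requires Drush).
drush sql-dump --result-file=/backups/db-$(date +%F).sql

# Archive the codebase, including sites/default/files.
tar -czf /backups/code-$(date +%F).tar.gz /var/www/html
```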

To sum up, follow the above-mentioned steps to secure your Drupal website. Also, reporting a security breach to the Drupal community can be an effective way to patch the issue and seek help from the community to avoid massive risk.

Now go ahead and secure your Drupal website!
 
