I think npm was one of the reasons for the quick adoption of Node.js. As of writing this article there are close to 700,000 packages on npm. If you want more details about packages across different platforms you can check out http://www.modulecounts.com/. I know it is comparing apples to oranges to compare packages across different platforms, but at least it should give you some sense of the adoption of Node and JavaScript.
npm package growth
Finding the right node package
Since there are so many packages we have a problem of plenty. For any given scenario there are multiple packages, and it becomes difficult to identify the right fit for your use case. I generally look up the GitHub repos of popular projects to decide which package to use. This does not always scale and needs more work.
So I have stuck to using http://npms.io/ for now. It has better search features and also rates packages based on different parameters. You can read about the rating logic at https://npms.io/about
For example, if you want a Twitter API package, you can search for it, which gives you an output like this:
Do let me know if there is a curated list of node packages or some help groups which help us identify the right packages.
Using additional features of npm
If you are a Node developer I am pretty sure you have already used npm and are comfortable with the popular commands npm init and npm install. So let us look at a few other handy commands and features.
Since there are more than 700,000 packages on npm, I wanted a simple way to keep track of my favourite packages. There seems to be a way, but it is not very user friendly.
From the web interface I didn’t find any option to star my favourite packages. For now it looks like we will have to make do with the npm CLI.
Login on command line with your credentials.
npm login
Once you run the command, enter your credentials. Currently it asks for your email id, which is public. I hope npm figures out a way to mask user email ids; I am not comfortable sharing mine.
npm login
Once you are logged in, you can check whether it was successful using the whoami command.
npm whoami
output of whoami
Starring a package
npm star axios
Starring a package
If you want a list of packages you have starred then you can use npm stars
npm stars
The command gives you output like that shown in the above image.
If you want the complete dependency list you can use npm list which gives a tree output like below.
npm list tree view
Most of the time this is overwhelming, and the first-level packages are usually a good enough check.
npm list --depth=0 2>/dev/null
If you use the above command you will get the list of first level packages in your project.
npm list first level
To go global or not
As a rule of thumb I have tried to reduce the number of packages I install globally. It always makes sense to install the packages locally as long as they are related to the project. I only consider installing a package globally if its utility is beyond the project or has nothing to do with the project. You can run the following command to see your list of globally installed packages.
npm list -g --depth=0 2>/dev/null
In my case the output is
npm list global packages
As you can see from the list, most of the packages are general purpose and have nothing to do with individual projects. I am not sure why I installed jshint globally. My Atom editor is set up with jshint and I think that should be sufficient. I will spend some time over the weekend to see why I did that.
Security Audit
In the latest npm versions, any security concerns are displayed when you run the npm install command. But if you want to audit your existing packages, run npm audit.
npm audit
This command gives you details of vulnerabilities in each package. It shows the dependency path so that you can judge the potential damage, if any. If you want more details you can check out the Node security advisory.
You can run a command like npm update fsevents --depth 3 to fix individual vulnerabilities as suggested, or you can run npm audit fix to fix all the vulnerabilities at once like I did.
npm audit fix
NPX
Another problem I have faced with globally installed packages is that by the time I run one of them, a newer version has usually been released. So it doesn’t make much sense to install them in the first place. npx comes to your rescue.
To know more about npx read the following article.
For example, to run mocha on an instance all you need to do is npx mocha. Isn’t that cool? The packages you saw on my instance are the ones I had installed before coming across npx. I haven’t installed any packages globally since I started using npx.
Licence crawler
Let us look at one sample use case for npx. While most of the packages on npm are under the MIT licence, it is better to take a look at the licences of all the packages when you are working on a project for your company.
npx npm-license-crawler
npm licence details
npm, yarn or pnpm
Well, npm is not the only option out there. Yarn and pnpm are popular alternatives. Yarn started out as more of a wrapper around npm, built by Facebook to address npm's shortcomings. With competition heating up, npm has been quick to implement features from Yarn. If you are worried about disk space you can use pnpm. If you want a detailed comparison of the three, check out https://www.voitanos.io/blog/npm-yarn-pnpm-which-package-manager-should-you-use-for-sharepoint-framework-projects
There is a lot of buzz around ad viewability nowadays. In the world of digital marketing, it has become such a hot topic that every firm is looking to build its presence online. But what exactly does it mean to be “viewable”? Does it mean people will look at your ad? Or is it the silver bullet the industry is looking for?
Let’s solve this puzzle. In this post, we will discuss everything you need to know about ad viewability such as:
What is viewability?
Why is viewability important?
When is an ad impression not viewable?
Best practices to enhance viewability to increase views
How to measure viewable impressions?
Let’s begin with the basics, what is ad viewability?
Ad viewability describes how visible your ad is on a website and to users. In other words, viewability is an online advertising metric that counts only the impressions actually seen by users.
Why is viewability important?
A high viewability rate indicates a high-quality placement, which can be valuable to advertisers as a Key Performance Indicator (KPI). Ad viewability also helps marketers calculate the relative success of a campaign, for example the click-through rate (CTR), which divides the number of clicks by the number of ads served.
Also, there are other factors that contribute to an ad not being seen, such as users clicking away from a page before the ad loads, or bots and proxy servers opening pages rather than live human beings.
When is an ad impression not viewable?
There is an assumption that header spots - placements that appear before the content the user has selected - represent the placement with the best viewability. At the same time, an ad placed somewhere in the middle of the page or near the relevant part of the content is an attractive alternative. These spots represent a good compromise in the ratio between price and viewability.
For instance, if an ad is loaded below the fold (at the bottom of a webpage) but a reader doesn’t scroll down far enough to see it, then the impression will not be considered viewable.
According to Google, the ad should be placed right above the fold and not at the top of the page. The most viewable ad sizes are the vertical size units such as 160x600.
Best practices to enhance viewability to increase views
Much has been said about best practices, but it never quite adds up to a single recipe. Publishers talk about diversifying their revenues, and evidently some of them have managed to achieve success, particularly those who have focused on it for a while.
For a publishing firm, it is always suggested to start with user personas and then design web pages in such a way that ads load with maximum viewability.
Try to design the web page so that the ad unit appears “above the fold”, or use a “sticky ad unit”. Sticky ad units remain locked in a specific location while the user scrolls.
Another key consideration for ad viewability is speed. Sites that are laden with ads from multiple ad networks can take a long time to load. Applying techniques that speed up ad delivery can greatly improve ad viewability.
Here are some of the best practices that can help you enhance ad viewability and increase revenue.
Digiday quoted Jim Norton, the former chief business officer at Condé Nast, as saying, “No single stream of alternative revenue will make up for the declines that we’re seeing in advertising.”
How to measure viewable impressions?
According to the Interactive Advertising Bureau (IAB), an ad that has at least 50 percent of its area on screen for one second or longer (for display ads) or two seconds or longer (for video ads) is counted as a viewable impression.
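If you want a feel for how such a measurement works in the browser, here is a minimal, hypothetical sketch using the standard IntersectionObserver API. The 50-percent-for-one-second rule follows the IAB definition above; the element id "adUnit" is an assumed placeholder for your own ad container.

// Count an impression as "viewable" once at least 50% of the ad element
// stays on screen for one second (IAB display definition).
const adElement = document.getElementById("adUnit"); // placeholder id
let visibleSince = null;

const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach((entry) => {
      if (entry.intersectionRatio >= 0.5) {
        if (visibleSince === null) {
          // Ad just crossed the 50% threshold; start the one-second timer.
          visibleSince = performance.now();
          setTimeout(() => {
            if (visibleSince !== null && performance.now() - visibleSince >= 1000) {
              console.log("Viewable impression recorded");
              observer.disconnect(); // count it only once
            }
          }, 1000);
        }
      } else {
        // Dropped below 50% before one second elapsed; reset the timer.
        visibleSince = null;
      }
    });
  },
  { threshold: [0.5] }
);

observer.observe(adElement);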
Bottom line: Buying a "viewable" ad impression does not guarantee that it is going to be seen and/or clicked on. However, there are several ways to improve the chances of your ad being viewed. It is also important to understand that online ad success cannot be determined by views and clicks alone; you need to consider the entire buyer journey.
If you are a publishing firm looking for experts to integrate the needs of a publishing platform with Drupal we can help. Get in touch.
With advances in technology, automation is playing a key role in software development processes, as it enables teams to verify regressions and functionality and to run tests simultaneously in the most efficient way. Technology never stands still, and web frameworks like Drupal consistently enhance their tooling to keep up. Automation is often the best choice for web-based applications, since verifying and testing application interfaces is comparatively easier than it was for earlier generations of web applications.
Before we talk about Behaviour Driven Development (BDD), let’s have a look at why we automate tests:
Improves speed
Better Test Coverage
Better efficiency
Boosts Developers & Testers Morale
Requires fewer human resources
Cost efficient
What is BDD?
BDD is a methodology for developing software through example-based communication between developers, QA, project managers and the business team.
The primary goal of BDD is to improve communication between the business team and the development team so that everyone understands the functional requirements and ambiguities are avoided. The methodology encourages continuous communication, deliberate discovery and test automation.
Why should we follow BDD methodology?
BDD is an extension of TDD (Test Driven Development). As in TDD, in BDD we write tests first and then add application code. The behaviour is easy to describe using a ubiquitous language. Further, BDD follows an example-based communication process between teams, QA and business clients.
Example Based Communication
This helps the business team and developers clearly understand the client's requirements. BDD is largely facilitated through the use of a domain-specific language built from natural-language constructs.
Gherkin
Gherkin is the language used for defining behaviour: features are written in Gherkin. Behat is a tool that tests the behaviour of your application as described in this special language, designed especially for behaviour descriptions.
Gherkin Structure
Behat is a BDD (Behavior Driven Development) framework for PHP for automatically testing your business expectations. Behat runs the test cases written in the Gherkin structure.
Example structure of Feature File:
Gherkin keywords and their descriptions are as follows:
Feature: A descriptive section of what is desired; it starts the feature and gives it a title. Behat doesn’t parse the next three lines of text; they give context to the people reading your feature.
Scenario: A determinable business situation; it starts the scenario and contains a description of the scenario.
Steps: A feature consists of steps known as Givens, Whens and Thens. Behat doesn’t technically differentiate between these three kinds of steps. A feature file can contain a single scenario to test the behaviour of the application, or multiple scenarios.
Given: Defines the initial state of the system for the scenario.
When: Describes the action taken by the person/role.
Then: Describes the observable system state after the action has been performed.
And/But: Can be added to create multiples of Given/When/Then lines.
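To make the structure concrete, here is a small, hypothetical feature file; the page, field labels and expected text are made up and would need to match your own application (the built-in steps shown come from MinkContext, set up later in this post).

Feature: Site search
  In order to find content quickly
  As a visitor
  I need to be able to search the site

  Scenario: Searching for an existing article
    Given I am on the homepage
    When I fill in "Search" with "Drupal"
    And I press "Search"
    Then I should see "Search results"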
Prerequisites:
PHP higher than 5.3.5, with the “curl, mbstring, xml” extensions installed (Behat itself is a library and can be easily installed using Composer).
Installing Behat
Step 1: Follow the commands in the terminal to create a composer.json file. Install Behat in any path you want, let’s say the root folder of the project.
If you want to do JavaScript testing with Selenium, also install the selenium2-driver; otherwise it is not required.
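A minimal sketch of these install commands, assuming you use Composer (versions are not pinned here, and the selenium2-driver line is only needed for the Selenium setup mentioned above):

# Run from the folder where you want Behat installed; composer.json is created if absent
composer require --dev behat/behat behat/mink behat/mink-extension behat/mink-goutte-driver
# Optional: only if you plan to run JavaScript tests through Selenium
composer require --dev behat/mink-selenium2-driver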
Start using Behat in your project by calling vendor/bin/behat --init. This will set up a features directory in the behat directory.
Step 2: After running the init command, open the FeatureContext.php file under /behat/features/bootstrap and add the below code snippet.
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\MinkExtension\Context\MinkContext;
Step 3: Extend your FeatureContext class from MinkContext and implement SnippetAcceptingContext and Context.
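Putting Steps 2 and 3 together, the context class would look roughly like this; it is a minimal sketch, and your generated file may contain additional boilerplate.

<?php

use Behat\Behat\Context\Context;
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\MinkExtension\Context\MinkContext;

/**
 * Defines application features from the specific context.
 * Extending MinkContext gives us ready-made browser steps
 * (visit a page, fill in fields, press buttons, assert text).
 */
class FeatureContext extends MinkContext implements Context, SnippetAcceptingContext {

  // Custom step definitions for your application go here.

}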
Step 4: Now create a config file called behat.yml in the behat directory.
In behat.yml we specify the base URL of the instance that Behat should test.
The Goutte driver acts as a bridge between Behat and your application.
wd_host is the WebDriver host URL; for Selenium integration with Behat it can be 127.0.0.1:4444/wd/hub.
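A minimal behat.yml along those lines could look like the sketch below; the base_url value is a placeholder for your own instance, and the selenium2 block is only needed for @javascript scenarios.

default:
  suites:
    default:
      contexts:
        - FeatureContext
  extensions:
    Behat\MinkExtension:
      base_url: 'http://your-site.local'   # instance Behat should test (placeholder)
      goutte: ~                            # headless driver, bridge to your application
      selenium2:
        wd_host: 'http://127.0.0.1:4444/wd/hub'   # Selenium WebDriver host
      browser_name: firefox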
Note: If you want to test with Selenium integration, you should downgrade your Firefox version to 46. The Selenium standalone server version should be 2.52.0 and your Firefox driver should be geckodriver version 0.17.0. Just downloading the archive is enough to start the Selenium server.
Currently, Selenium integration works successfully with Firefox version 46 and the appropriate Firefox driver. If you want to test with Firefox, set browser_name: firefox in the behat.yml file.
In the feature file, we should add the “@javascript” tag before the scenario starts; only then will Behat use the Selenium server for browser testing.
Starting Selenium server
Start Selenium server for javascript testing
java -Dwebdriver.gecko.driver="/path/to/geckodriver" -jar selenium-server-standalone-2.52.0.jar
(or)
java -jar selenium-server-standalone-2.44.0.jar
Don’t forget to specify @javascript in feature file to run Selenium testing.
Create feature file under /behat/features/login.feature
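A hypothetical login.feature along those lines is shown below; the path, field labels and expected text are placeholders, and the @javascript tag is only needed when the scenario must run through Selenium.

@javascript
Feature: Log in to the site
  As a registered user
  I want to log in
  So that I can access restricted pages

  Scenario: Successful login
    Given I am on "/user/login"
    When I fill in "Username" with "testuser"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "Log out"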
Once you are done with features, scenarios, and steps, run the feature file in the terminal from the path where Behat is installed: vendor/bin/behat will run all feature scripts.
You can also run single feature file like vendor/bin/behat features/file_name.feature.
If you want to run the Selenium JavaScript testing with a pause so you can monitor it, you can add a step like “And I wait for 2” to the scenario. The “iWaitFor” function is nothing but the step definition behind that line; the number is specified in seconds.
Similarly, here is an example of triggering the “Enter” keyboard key.
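Hypothetical step definitions for both of these could look like the sketch below; they would live in FeatureContext.php, and the step wording and field names are assumptions you may need to adapt.

  /**
   * Pause the scenario for N seconds, e.g. "And I wait for 2".
   *
   * @Then /^I wait for (\d+)$/
   */
  public function iWaitFor($seconds) {
    // Mink's wait() expects milliseconds.
    $this->getSession()->wait($seconds * 1000);
  }

  /**
   * Press the Enter key inside a named field, e.g. 'And I press enter in "Search"'.
   *
   * @When /^I press enter in "([^"]*)"$/
   */
  public function iPressEnterIn($field) {
    $element = $this->getSession()->getPage()->findField($field);
    // 13 is the keycode for the Enter/Return key.
    $element->keyPress(13);
  }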
The command vendor/bin/behat -dl shows the list of all step definitions available to Behat.
Sample output:
We have covered behaviour-driven development in detail, along with example-based communication between teams, QA, and business clients. We also touched on the Gherkin structure, usage of the Behat tool and the installation procedure. This should give you an overall idea of JavaScript automation testing. Feel free to share your experiences and any issues you come across.
Below is a presentation on "Behavior Driven Development".
Media and publishing companies typically run their business on segmented revenue streams like advertising, promotions, etc. Each revenue stream often reports vertically to drive a steep rise in global annual turnover. But could it be the right time for publishers to focus on paid ARPU through subscriptions and memberships?
Thanks to major content players like Netflix, Amazon, Spotify and others who have leveraged the subscription model, it is now proven that users are willing to pay for quality content. Perhaps it’s high time the publishing industry focused more on fixing its business model and less on ways to further scale content production.
We are, in the simplest terms, a subscription-first business. Our focus on subscribers sets us apart in crucial ways from many other media organizations. We are not trying to maximize clicks and sell low-margin advertising against them. We are not trying to win a pageviews arms race. We believe that the more sound business strategy for The Times is to provide journalism so strong that several million people around the world are willing to pay for it.
With this, the question arises: how are these organizations increasing their ARPU with paid subscriptions, memberships, events, and lead generation?
First, publishers need to work on their content strategy and user experience to boost audience engagement. Simultaneously, they need to consider their audience base and figure out their goal. They can also ask: what is the unique value proposition you can offer your client, and how do you ensure they are aware of it?
Of course, here, introducing a premium model should be the first step, but more needs to be done if the transition is to be successful. Let’s have a look at four key areas publishing houses can work on.
Paid Subscription: Publishers are turning directly to site viewers for paid subscriptions rather than relying on advertising. Typically, a subscription model requires users to pay money to get access to a product or service. It works best when the company provides highly specialized and unique information that readers can’t find anywhere else.
Membership: A membership is the notion of belonging. It says nothing of cost or price, though most memberships end up having a cost to them. Membership programs are exclusive and carry tremendous benefits. Being a member gets you access to other members, which may be the thing that is valued most.
Events: Hosting events to diversify revenue streams is nothing new. Often event organizers combine their resources showcasing the magazine’s content as part of a unique offering. A planned occasion not only provides an additional revenue stream but also increases subscribers base. According to the American Press Institute, “Incorporating an events strategy with publishing also strengthens a publisher’s brand and bottom line while deepening connections with audiences, sponsors, and advertisers.”
Lead Generation: It typically starts with data collection, pushing prospects to landing pages and asking them to fill in lead-gen forms to get free ebooks, whitepapers, and other resources. This data is used to better understand the leads, engage with them and refine lead-gen programs. Here, email marketing helps publishers reach their target audience.
Further, publishers can utilize proven online marketing methods, such as webinars, co-registration, affiliate partnerships, search engine optimization and others.
Are you interested in adding a subscription feature to your own website to provide services to your customers? Not only do we help you keep track of customers and their subscriptions, we also handle end-to-end Drupal management services. Get in touch with our experts to find out how you can use the subscription model to increase your ARPU.
Having a hard time finding a JavaScript library that can display stock and timeline charts in your web or mobile application? Recently, I was working on a Drupal project where the client's requirement was to add exactly such a feature to their web application. While doing secondary research, our team came across Highstock - a JavaScript library - that allows you to create general timeline charts and embed them in a website.
First, have a look at what exactly Highstock is.
Highstock helps display stock and timeline charts in web/mobile applications based on your data. A Highstock chart offers a wide range of features like a basic navigator series, date range, date picker and scroll bar. Still wondering how to use it to the fullest? Integrate the Drupal 8 REST API with the Highstock JavaScript library.
Integrating Drupal 8 REST API with Highstock javascript library.
Step 1: Create a custom module. In my case, I will be creating a module named Highstock.
Step 3: Create highstock.libraries.yml file to add highstock library.
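A minimal sketch of what highstock.libraries.yml could contain is shown below; the CDN URL and the local js file name are assumptions, so adjust them to however you actually ship the library.

highstock_chart:
  version: 1.x
  js:
    # Highstock itself, pulled from the Highcharts CDN (assumed; you can also ship it locally).
    https://code.highcharts.com/stock/highstock.js: { type: external, minified: true }
    # Our own glue code that fetches the REST data and draws the chart (added in Step 5).
    js/highstock_chart.js: {}
  dependencies:
    - core/jquery
    - core/drupalSettings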
Step 4: Create Rest API Resource, which provides the input for the chart.
Highstock accepts input in the following format: an array structure in which each entry contains the x-axis and y-axis values, comma separated. So while creating the REST API we need to generate output in that format.
Step 4.1: In Drupal 8, create HighstockChart.php file inside /src/Plugin/rest/resource.
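A stripped-down sketch of such a resource is shown here. The plugin id, uri path and sample data are placeholders; in a real resource you would load the values from your own entities.

<?php

namespace Drupal\highstock\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * Exposes [x, y] pairs for the Highstock chart.
 *
 * @RestResource(
 *   id = "highstock_chart",
 *   label = @Translation("Highstock chart data"),
 *   uri_paths = {
 *     "canonical" = "/api/highstock-chart"
 *   }
 * )
 */
class HighstockChart extends ResourceBase {

  /**
   * Responds to GET requests with an array of [timestamp, value] pairs.
   */
  public function get() {
    // Placeholder data; query your own storage here.
    $data = [
      [1535760000000, 25],
      [1535846400000, 28],
      [1535932800000, 22],
    ];
    $response = new ResourceResponse($data);
    // Disable caching in this sketch so fresh data is always returned.
    $response->getCacheableMetadata()->setCacheMaxAge(0);
    return $response;
  }

}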
Step 5: Create a highstock_chart.js file to integrate REST API output with a highstock library.
Highstock library provides various types of charts like single line series, line with marker and shadow, spline, step line, area spline etc. You can find the types of charts here https://www.highcharts.com/stock/demo.
In the JS, we call the API, which gives us the JSON output. The chart is then rendered based on the chart type configured.
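A minimal highstock_chart.js along those lines could look like this; the endpoint /api/highstock-chart and the container id highstock-chart are assumptions that must match your resource and block markup (the sketch does not guard against repeated attachment).

(function ($, Drupal) {
  Drupal.behaviors.highstockChart = {
    attach: function (context) {
      // Fetch the [x, y] pairs exposed by the REST resource (assumed path).
      $.getJSON('/api/highstock-chart', function (data) {
        // Render a basic line chart into the container provided by the block.
        Highcharts.stockChart('highstock-chart', {
          title: { text: 'Sample Highstock chart' },
          series: [{
            type: 'line',
            name: 'Value',
            data: data
          }]
        });
      });
    }
  };
})(jQuery, Drupal);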
Step 6: Create a block HighstockChartBlock.php to show the chart.
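A bare-bones sketch of HighstockChartBlock.php could look like the following; it simply renders a container div with the id the JS above expects and attaches the library from Step 3.

<?php

namespace Drupal\highstock\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides a block that renders the Highstock chart container.
 *
 * @Block(
 *   id = "highstock_chart_block",
 *   admin_label = @Translation("Highstock chart block")
 * )
 */
class HighstockChartBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return [
      // Empty container that highstock_chart.js will draw into.
      '#markup' => '<div id="highstock-chart"></div>',
      '#attached' => [
        'library' => ['highstock/highstock_chart'],
      ],
    ];
  }

}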
Place the above block in any of your desired regions and it will display the chart like below:
Default JS provides the following properties:
Range selector
Data range
Scrollbar at the bottom.
The menu icon on the right provides options to print the chart and to download it in PNG, JPEG, SVG, and PDF formats.
Hovering the mouse shows a marker with the x-axis and y-axis values highlighted.
Properties of Highstock Chart:
The Highstock JavaScript library provides several properties and methods to configure the chart. All of these configurations can be found in the Highstock API reference: https://api.highcharts.com/highstock/.
We can modify charts using those properties. I have referred to the above link and configured my charts as mentioned below:
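For illustration, the sort of properties I mean looks something like the hedged sketch below; the values and sample data are placeholders drawn from the Highstock API reference rather than my exact configuration.

Highcharts.stockChart('highstock-chart', {
  rangeSelector: {
    selected: 1,         // preselect the second zoom range button
    inputEnabled: true   // show the date picker inputs
  },
  navigator: { enabled: true },   // mini chart at the bottom for panning
  scrollbar: { enabled: true },
  credits: { enabled: false },    // hide the highcharts.com label
  title: { text: 'Chart with custom properties' },
  series: [{ name: 'Value', data: [[1535760000000, 25], [1535846400000, 28]] }]
});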
Add the above properties in the highstock_chart.js of your custom module. After applying all the properties, the chart will look similar to the image below.
This API is very handy when it comes to presenting complex data structures to end users in the form of colourful charts. You should definitely pitch it to clients who are still showing data in traditional tables, Excel sheets, etc. I hope you can now easily integrate the Drupal 8 REST API with Highstock. If you have any suggestions or queries, please leave a comment and I will try to answer.
Async and Await are extensions of promises. So if you are not clear about the basics of promises please get comfortable with promises before reading further. You can read my post on Understanding Promises in Javascript.
I am sure that many of you are using async and await already, but I think it deserves a little more attention. Here is a small test: if you can’t spot the problem with the code below, read on.
for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();
console.log("userDetailsJSON", userDetailsJSON);
}
We will revisit the above code block later, once we have gone through the async await basics. As always, the Mozilla docs are your friend; especially check out the definitions.
async and await
From MDN
An asynchronous function is a function which operates asynchronously via the event loop, using an implicit Promise to return its result. But the syntax and structure of your code using async functions is much more like using standard synchronous functions.
I wonder who writes these descriptions. They are so concise and well articulated. To break it down.
The function operates asynchronously via event loop.
It uses an implicit Promise to return the result.
The syntax and structure of the code is similar to writing synchronous functions.
And MDN goes on to say
An async function can contain an await expression that pauses the execution of the async function and waits for the passed Promise's resolution, and then resumes the async function's execution and returns the resolved value. Remember, the await keyword is only valid inside async functions.
Let us jump into code to understand this better. We will reuse the three functions we used for understanding promises here as well.
A function that returns a promise which either resolves or rejects after n seconds (promiseTRRARNOSG), and two more deterministic functions: one which resolves after n seconds (promiseTRSANSG) and another which rejects after n seconds (promiseTRJANSG).
var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
resolve({
resolvedAfterNSeconds: n
});
}, n * 1000);
});
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
reject({
rejectedAfterNSeconds: n
});
}, n * 1000);
});
});
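promiseTRRARNOSG (the promise that randomly resolves or rejects after a random number of seconds) is defined in the promises post referenced above. For completeness, here is a minimal sketch consistent with how it is used below; the exact randomisation is an assumption.

// Sketch of promiseTRRARNOSG: resolves or rejects, randomly, after a random
// number of seconds (0-9).
var promiseTRRARNOSG = function() {
  return new Promise(function(resolve, reject) {
    var randomNoOfSeconds = Math.floor(Math.random() * 10);
    setTimeout(function() {
      if (randomNoOfSeconds % 2 === 0) {
        resolve({ resolvedAfterNSeconds: randomNoOfSeconds });
      } else {
        reject({ rejectedAfterNSeconds: randomNoOfSeconds });
      }
    }, randomNoOfSeconds * 1000);
  });
};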
Since all three of these functions return promises, we can also call them asynchronous functions. See, we wrote async functions even before knowing about them.
If we had to use the function promiseTRSANSG using standard format of promises we would have written something like this.
var promise1 = promiseTRSANSG(3);
promise1.then(function(result) {
console.log(result);
});
There is a lot of unnecessary code here, like the anonymous function used just for assigning the handler. What async await does is improve the syntax so that the code looks more like synchronous code. If we had to do the same as above in async await format it would look like this:
result = await promiseTRSANSG(3);
console.log(result);
Well, that looks much more readable than the standard promise syntax. When we used await, execution of the code was blocked, which is why the value of the promise resolution ends up in the variable result. As you can make out from the above code sample, when you use await the result is assigned to the variable directly instead of inside the .then part. You can also make out that the .catch part is not present here; that is because errors are handled using try catch. So instead of using promiseTRSANSG let us use promiseTRRARNOSG. Since this function can either resolve or reject, we need to handle both scenarios. In the above code we wrote just two lines to give you an easy comparison between the standard format and the async await format. The example in the next section gives you a better idea of the format and structure.
General syntax of using async await
async function testAsync() {
for (var i = 0; i < 5; i++) {
try {
result1 = await promiseTRRARNOSG();
console.log("Result 1 ", result1);
result2 = await promiseTRRARNOSG();
console.log("Result 2 ", result2);
} catch (e) {
console.log("Error", e);
} finally {
console.log("This is done");
}
}
}
testAsync();
From the above code example you can make out that instead of using promise-specific error handling we are using the more generic approach of try catch. That is one thing less for us to remember, and it also improves overall readability even after accounting for the try catch block around our code. Based on the level of error handling you need, you can add any number of catch blocks and make the error messages more specific and meaningful.
Pitfalls of using async and await
async await makes it much easier to use promises. Developers from a synchronous programming background will feel at home while using async and await. This should also alert us: it means we are drifting towards a more synchronous approach if we don’t keep watch.
The whole point of JavaScript/Node.js is to think asynchronously by default and not as an afterthought. async await generally means you are doing things in a sequential way, so make a conscious decision whenever you want to use it.
Now let us start analysing the code that I flashed at you in the beginning.
for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();
console.log("userDetailsJSON", userDetailsJSON);
}
This seems like a harmless piece of code that fetches the GitHub details of three users: “nkgokul”, “BrendanEich” and “gaearon”. Right. That is true; that is what it does. But it also has some unintended consequences.
Before diving further into the code let us build a simple timer.
startTime = performance.now(); //Run at the beginning of the code
function executingAt() {
return (performance.now() - startTime) / 1000;
}
Now we can use executingAt wherever we want to print the number of seconds that have elapsed since the beginning.
async function fetchUserDetailsWithStats() {
i = 0;
for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
i++;
console.log("Starting API call " + i + " at " + executingAt());
userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();
console.log("Finished API call " + i + "at " + executingAt());
console.log("userDetailsJSON", userDetailsJSON);
}
}
Check out the output below.
async-await analysed
As you can see from the output, each awaited call starts only after the previous one has completed. We are trying to fetch the details of three different users: “nkgokul”, “BrendanEich” and “gaearon”. It is pretty obvious that the output of one API call is in no way dependent on the output of the others.
The only dependence we have is these two lines of code.
We can create the userDetailsJSON object only after getting userDetails. Hence it makes sense to use await here, within the scope of fetching the details of a single user. So let us make an async function for getting the details of a single user.
async function fetchSingleUsersDetailsWithStats(name) {
console.log("Starting API call for " + name + " at " + executingAt());
userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();
console.log("Finished API call for " + name + " at " + executingAt());
return userDetailsJSON;
}
Now that the fetchSingleUsersDetailsWithStats is async we can use this function to fetch the details of the different users in parallel.
async function fetchAllUsersDetailsParallelyWithStats() {
let singleUsersDetailsPromises = [];
for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
let promise = fetchSingleUsersDetailsWithStats(name);
console.log(
"Created Promise for API call of " + name + " at " + executingAt()
);
singleUsersDetailsPromises.push(promise);
}
console.log("Finished adding all promises at " + executingAt());
let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
console.log("Got the results for all promises at " + executingAt());
console.log(allUsersDetails);
}
When you want to run things in parallel, the thumb rule that I follow is
Create a promise for each async call. Add all the promises to an array. Then pass the promises array to Promise.all. This in turn returns a single promise, on which we can use await.
When we put all of this together we get
startTime = performance.now();
async function fetchAllUsersDetailsParallelyWithStats() {
let singleUsersDetailsPromises = [];
for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
let promise = fetchSingleUsersDetailsWithStats(name);
console.log(
"Created Promise for API call of " + name + " at " + executingAt()
);
singleUsersDetailsPromises.push(promise);
}
console.log("Finished adding all promises at " + executingAt());
let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
console.log("Got the results for all promises at " + executingAt());
console.log(allUsersDetails);
}
async function fetchSingleUsersDetailsWithStats(name) {
console.log("Starting API call for " + name + " at " + executingAt());
userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();
console.log("Finished API call for " + name + " at " + executingAt());
return userDetailsJSON;
}
fetchAllUsersDetailsParallelyWithStats();
The output for this is
Promises run in parallel with timestamps
As you can make out from the output, promise creation is almost instantaneous whereas API calls take some time. We need to stress this, as the time taken for promise creation and processing is trivial compared to IO operations. So while choosing a promise library it makes more sense to pick one that is feature rich and has a better dev experience. Since we are using Promise.all, all the API calls run in parallel. Each API call takes almost 0.88 seconds, but since they are made in parallel we were able to get the results of all of them in 0.89 seconds.
In most scenarios, understanding this much should serve us well; you can skip to the Thumb Rules section. But if you want to dig deeper, read on.
Digging deeper into await
For this, let us limit ourselves to the promiseTRSANSG function. Its outcome is more deterministic and will help us identify the differences.
Asynchronous execution starts as soon as the promise is created; await just blocks the code within the async function until the promise is resolved. Let us create a function that will help us understand this clearly.
var concurrent = async function() {
startTime = performance.now();
const resolveAfter3seconds = promiseTRSANSG(3);
console.log("Promise for resolveAfter3seconds created at ", executingAt());
const resolveAfter4seconds = promiseTRSANSG(4);
console.log("Promise for resolveAfter4seconds created at ", executingAt());
resolveAfter3seconds.then(function(){
console.log("resolveAfter3seconds resolved at ", executingAt());
});
resolveAfter4seconds.then(function(){
console.log("resolveAfter4seconds resolved at ", executingAt());
});
console.log(await resolveAfter4seconds);
console.log("await resolveAfter4seconds executed at ", executingAt());
console.log(await resolveAfter3seconds);
console.log("await resolveAfter3seconds executed at ", executingAt());
};
concurrent();
Concurrent start and then await
From the previous post we know that .then is event driven; that is, .then is executed as soon as the promise is resolved. So let us use resolveAfter3seconds.then and resolveAfter4seconds.then to identify when our promises are actually resolved. From the output we can see that resolveAfter3seconds resolves after 3 seconds and resolveAfter4seconds resolves after 4 seconds. This is as expected.
Now, to check how await affects the execution of the code, we awaited the promises in the opposite order.
As we saw from the .then output, resolveAfter3seconds resolved one second before resolveAfter4seconds. But we placed the await for resolveAfter4seconds first, followed by the await for resolveAfter3seconds.
From the output we can see that even though resolveAfter3seconds was already resolved, its value got printed only after the output of console.log(await resolveAfter4seconds); was printed. This reiterates what we said earlier: await only blocks the execution of the next lines of code in the async function and doesn’t affect promise execution.
Disclaimer
The MDN documentation mentions that Promise.all is still serial and that using .then is truly parallel. I have not been able to understand the difference and would love to hear back if anybody has got their head around it.
Thumb Rules
Here is a list of thumb rules I use to keep my head sane while using async and await.
async functions return a promise.
async functions use an implicit Promise to return their result. Even if you don’t return a promise explicitly, the async function makes sure that your result is wrapped in a promise.
await blocks the code execution within the async function of which it (the await statement) is a part.
There can be multiple await statements within a single async function.
When using async await make sure to use try catch for error handling.
If your code contains blocking code it is better to make it an async function. By doing this you are making sure that somebody else can use your function asynchronously.
By making async functions out of blocking code, you are enabling the user who calls your function to decide on the level of asynchronicity they want.
Be extra careful when using await within loops and iterators. You might fall into the trap of writing sequentially executing code when it could have been easily done in parallel.
await is always for a single promise. If you want to await multiple promises (run these promises in parallel), create an array of promises and then pass it to the Promise.all function.
Promise creation starts the execution of asynchronous functionality.
await only blocks the code execution within the async function. It only makes sure that the next line is executed when the promise resolves. So if an asynchronous activity has already started, await will not have an effect on it.
Please point out if I am missing something here or if something can be improved.
The last few months have been quite challenging for media and publishing enterprises, dealing with the EU’s new data privacy law - GDPR - and Drupal's highly critical vulnerability - DrupalGeddon 2.
On 28 March, Drupal announced the alerts about DrupalGeddon 2 (SA-CORE-2018-002 / CVE-2018-7600) - which was later patched by the security team. The vulnerability was potential enough to affect the vast majority of Drupal 6, 7 and 8 websites.
Earlier in October 2014, Drupal faced similar vulnerability - tagged as DrupalGeddon. At that time, the security patch was released within seven hours of the critical security update.
So here the question is - how vulnerable is Drupal?
In short, we can’t specify exactly how vulnerable Drupal is, as it entirely depends on the context. You may find the answer to this question in one of our previous posts, where we talked about “Drupal Security Advisor Data”.
Implement these measures to secure your Drupal website
1. Upgrade to the latest version of Drupal
Whether it is your operating system, antivirus or Drupal itself, running the latest version is always suggested. And this is the least you can and should do to protect your website.
Updates not only bring new features but also enhance security. Further, you should keep modules updated, as outdated modules are most often the cause of misery. It's always recommended to check the update report and keep updating at regular intervals. The latest version at the time of writing is Drupal 8.3.1.
Note that it is older versions of CMS that hackers usually target as they are more vulnerable.
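For a Composer-managed Drupal 8 site with Drush available, keeping things current typically boils down to a few commands; this is a hedged sketch, and the exact workflow depends on how your site is built.

# Check which projects have updates available
drush pm-updatestatus
# Update Drupal core (Composer-based install)
composer update drupal/core --with-dependencies
# Apply any pending database updates
drush updatedb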
2. Remove unnecessary modules
Agreed, modules play a critical role in enhancing the user experience. However, you should be wary of what you download, as every additional module increases the potential for vulnerabilities. Also, ensure that the module has a sizable number of downloads.
That way, even if some vulnerability occurs, it will be resolved quickly by the community, as it affects a major chunk of companies and individuals. Furthermore, you should disable unused modules or uninstall them completely.
3. Practice strong user management
In a typical organization, several individuals require access to the website to manage different areas within it. Each of these users is a potential source of a security breach, so it is important to keep control of their permissions.
Give limited access to the site instead of granting access to the whole site by default. And when a user leaves the organization, they should be promptly removed from the administrator list to eliminate any unnecessary risk. Read on for a quick review of “managing user roles & permissions in Drupal 8”.
4. Choose a proper hosting provider
It's always a dilemma to figure out which hosting provider to trust for your website. Not to mention, the hosting provider plays a key role in ensuring the security of the website. Look for a hosting provider that offers a security-first Drupal hosting solution with all the server-side security measures, like SSL.
5. Enable HTTPS
Serving your site over HTTPS encrypts the traffic between your users and your server and should be the default today. As a core member of the development team, a business owner or a decision maker, it is your responsibility to take ownership of the security of your enterprise website.
Consider performing a check for common vulnerabilities at regular intervals, as it will allow you to make quick work of those holes. Here is what Drupal experts have to say about "securing users' private data from unauthorized access".
6. Backup regularly
Plan for the worst. Keep backups of your codebase and database handy. There can be a number of reasons, both accidental and intentional, why your hard work might get destroyed. Here is a list of reasons why you should regularly back up your website (a sample backup command follows the list):
General safety
The original version of your site has aged
Respond quickly if your site is hacked
Updates went wrong
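As a starting point, a backup can be as simple as dumping the database and archiving the codebase. Below is a minimal sketch using Drush; the paths and filenames are placeholders you would adapt to your own setup.

# Dump the database (gzipped) with a dated filename
drush sql-dump --gzip --result-file=../backups/site-$(date +%F).sql
# Archive the codebase, excluding generated files
tar -czf ../backups/codebase-$(date +%F).tar.gz --exclude='sites/default/files' .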
To sum up, you need to follow the above-mentioned steps in order to secure your Drupal website. Also, reporting a security breach to the Drupal community can be an effective way to patch the issue and seek help from the community to avoid massive risk.
Well, the title was a hyperbole. Now that I have got your attention, let us get started. It might be a stretch to say that we can kill Twitter today, but in this post I would like to show that it may not be impossible after all, at least in a couple of years.
A few things to know before we start killing twitter.
It starts with realising that you are doing Twitter a favour; Twitter is not doing you a favour. Yes, I agree that Twitter has been a great tool and even played a role in the Arab Spring. Check out Social Media Made the Arab Spring, But Couldn't Save It for further details.
But we need to realise that while these are pleasant side effects of Twitter/social media, for a service or business to be sustainable it has to be profitable, or at least have profit-generating potential in the future. Irrespective of whether the service follows an ad-revenue model or a freemium model, one thing is common: either you pay for the service, or the service needs to sell something to somebody.
Understanding what is that something that is sold and to whom it is sold is important.
Let us start with the most quoted quote regarding the free services or seemingly free services.
Most social media users forget the value they are adding to the networks. It is easy for us to see a blog post or a video as data/content, but we fail to realise that even the short status updates we post, and our comments on them, are also content.
Every action we take on social media is valuable and adds to the valuation of the platform. How much that action is worth and how it is valued requires a detailed analysis (I will be following this post up with a couple of related posts on the topic). But for now, let us understand this much.
Every action that we do on a social media website falls into one of the following categories.
Content creation
Content curation
Content Distribution
Training the AI models.
I have tried to highlight the same in this tweet of mine.
It is difficult for people to understand this, as they cannot see it clearly; or rather, there is no easy way for them to see it. It only becomes clear in conversations like the following. In February this year, when aantonop was complaining about how Facebook was locking him out, one of the users mentioned this.
Anton’s reply was interesting.
So this brings us to the question: who is benefiting from whom? Is the platform benefiting from the user, or is the user benefiting from the platform? At best it is a synergy between platform and user; at worst the platform is ripping off your data and making a hell of a lot of money while not rewarding you in any way.
What is your data worth?
Data and the value it creates have different lifetimes, and there are lots of overlaps, so it is difficult to put a value on it. Let us use a very crude way to identify the average minimum value of our data on Facebook. Facebook is valued at 600 billion USD today and has around 2 billion users. Since Facebook makes money primarily by showing ads and/or selling your data :P, the data created by each user should be worth at least 300 USD (600 billion / 2 billion).
One thing everybody seems to agree on is that data is the new oil and that it is valuable. But what most of us fail to understand is that oil has a single lifecycle whereas data has multiple lifecycles. So any valuation you put on a piece of data is only a moving value affected by various parameters. We also need to realise that data we consider archived or stale also has revenue-generating potential in the future. AI models will need a lot of data going forward and will unlock the revenue-generating potential of your data. In the following article you can check out how Bottos and Databroker DAO are unlocking the potential of data from various sources.
The two ways to realise the true value of your data
There are two ways you will realise that your data is worth something.
One: Somebody like Zuck sells your data and makes billions in the process.
Two: You look at the real money people make with data.
1. When your data is sold
The Cambridge Analytica exposé happened on March 17, 2018. It made it clear that user targeting is not just for ads and can be used for much more. There were serious concerns about users’ privacy, and the exposé once again proved that privacy is dead. What is more disturbing is that experts suggested this might have a serious effect on Facebook’s future valuation, but that turned out to be completely false. Can you spot the dip in Facebook's market cap caused by this scandal? I have highlighted it with a red circle towards the right end of the graph. This is what I would call “a major dip in the short term but a minor blip in the long term”. The quick correction back to the trend line only suggests that nobody takes privacy seriously any more.
Facebook Marketcap
2. When you look at real money people make with your data
I am sure that Andreas M. Antonopoulos knows the value of data. I am just taking this example as it was a high-profile case where content created elsewhere was able to generate revenue on another platform because of how data gets distributed. The interesting thing is that in this case the money was being used for translating aantonop’s videos into other languages. You can read more about it here.
The real aanntonop
Aantonop made the above post, which can be called a “proof of identity” post verifying that he is the real aantonop. The post gathered a lot of attention and earned rewards of 1,449 USD. I just hope Aantonop claims the amount one day and starts using Steem more frequently.
I took Aantonop’s example because he is very popular in the world of Bitcoin and his videos have helped many entrepreneurs take the plunge into Bitcoin. His videos are proof that well-made content has a long shelf life and revenue-generating potential even outside the platform it was created on.
Now let's get back to our original question.
How to kill twitter?
This might seem like an impossible proposition to many. Let us look at the reasons why it is difficult to kill twitter or facebook for that matter.
I don’t need another social network.
I first got to know about Robert Scoble in the Google+ days. I invited him to check out the Steemit platform and he replied with “I don’t need another social network.” Today we are in an age of social media overload. A new social media platform needs to cross a critical mass before everyone else follows. Replacing Facebook might be impossible for the next few years, but we might have a chance to replace Twitter with a decentralised version. Facebook has too much of a lead: it has your photos, videos, friends, memories, groups and pages, and any new entrant needs to address all of these to overcome it. With Twitter, on the other hand, a limited feature set with additional benefits should be able to sway the needle in the new entrant’s favour.
So for now let us assume that, given enough motivation, users might consider shifting to a new platform.
Twitter has first mover advantage
Twitter is huge, and it has the first-mover advantage. Yes, that might be the case. But the last year has proven that with the right incentive models you can have a jumpstart: Binance became the fastest unicorn in history.
So don’t be surprised if a new entrant replaces Twitter in less than a year.
Show me the money
Attributing a value to content is a tough task, and there have been many unsuccessful attempts in the past. I think the Steem blockchain has come further than any other attempt. By incentivising both content creation and content curation, Steem has figured out a subjective way to attribute value to content. With the release of SMTs later this year, the community will only get better at arriving at closer estimates of the value of posts. When people were told that their content is worth something, they were not able to relate to it. With platforms like Steem having put a definitive value on content and having paid it out to content creators (which many have encashed into fiat), the idea is more palpable now. Monetary incentives can do wonders, and as more people get to know about these platforms the effect will only compound.
Hitting the critical mass
To be a serious contender to twitter the new platform needs to hit the critical mass. This can be the real challenge. So here are the things that can be done.
Create a distributed cryptocurrency along the lines of Steem (especially the rewards mechanism). Keep the interface, UX and restrictions (like the number of characters) very similar to Twitter, so that people feel at home ;)
In addition to normal account creation, have a reserved namespace twitter-[twitter-handle]. This will be used to create a one-to-one mapping of user accounts from Twitter to the new blockchain.
An account for each user on Twitter is also created on the new platform; both the username and the password (private keys) are generated. Twitter users can claim their password by sending a tweet to the Twitter handle of the new blockchain, and the password or private keys will be DM'ed to them.
Since all tweets are public, duplicate them on the new platform under the users' accounts. If that is a stretch, it can be started with the latest tweets of popular accounts and then expanded slowly.
The beta users will have access to popular content on the new platform. Their retweets and likes will decide the value of the tweets mirrored from Twitter.
While users might be hesitant to create new accounts, I think very few people will be unhappy to claim existing ones, especially when they know there are rewards waiting to be encashed for the content they have created.
The incentives or rewards received on the new platform will be bigger for users with a huge number of followers (assuming their content is also liked by the beta users on the new platform). So if these influencers move to the new platform, they will also bring along at least some of their followers.
Considering that content on the blockchain will be censorship resistant and that good content is rewarded, the platform should be able to take off and hit critical mass very soon.
I am not sure what legal issues would surround an attempt like this, but I think it is definitely worth trying. A few crypto millionaires coming together should have enough funds to try something like this. What do you think? Will an attempt like this work? Share your thoughts.