There is a lot of buzz around ad viewability nowadays. In the world of digital marketing, it has become such a hot topic that every firm building an online presence is talking about it. But what exactly does it mean to be “viewable”? Does it mean people will look at your ad? Or is it the silver bullet the industry has been looking for?
Let’s solve this puzzle. In this post, we will discuss everything you need to know about ad viewability such as:
What is viewability?
Why is viewability important?
When is an ad impression not viewable?
Best practices to enhance viewability and increase views
How to measure viewable impressions?
Let’s begin with the basics: what is viewability?
Ad viewability describes how visible your ad is on a website to users. In other words, viewability is an online advertising metric that counts only the impressions actually seen by users.
Why is viewability important?
A high viewability rate indicates a high-quality placement, which can be valuable to advertisers as a Key Performance Indicator (KPI). Ad viewability also helps marketers calculate the relative success of a campaign via the click-through rate (CTR), found by dividing the number of clicks by the number of ads served (for example, 5 clicks on 1,000 served impressions is a 0.5% CTR).
There are also other factors that can prevent an ad from being seen, such as users clicking away from a page before the ad loads, or pages being opened by bots or proxy servers rather than live human beings.
When is an ad impression not viewable?
There is an assumption that header spots - placements that appear before the content the user has selected - represent the placement with the best viewability. At the same time, an ad placed somewhere in the middle of the site, or near the relevant part of the content, is an attractive alternative. These places represent a good compromise in the ratio between price and viewability.
For instance, if an ad is loaded below the fold (at the bottom of a webpage) but a reader doesn’t scroll down far enough to see it, the impression will not be considered viewable.
According to Google, the most viewable position is right above the fold, not at the top of the page. The most viewable ad sizes are vertical units such as 160x600.
Best practices to enhance viewability and increase views
Much has been said about best practices, but the discussion never quite ends. Publishers talk about diversifying their revenues, and evidently some of them have managed to achieve success, particularly those who have focused on it for a while.
As a publishing firm, it's always advisable to start with user personas, and then design web pages in such a way that ads load with maximum viewability.
Try to design web pages where the ad unit appears “above the fold”, or use a “sticky” ad unit. Sticky ad units are a type of ad that remains locked in a specific location while the user scrolls.
Another key consideration for ad viewability is speed. Sites that are laden with ads from multiple ad networks can typically take a long time to load. Applying techniques that speed up ad delivery can greatly improve ad viewability.
Here are some of the best practices that can help you enhance ad viewability and increase revenue.
Digiday quoted Jim Norton, the former chief business officer at Condé Nast, as saying: “No single stream of alternative revenue will make up for the declines that we’re seeing in advertising.”
How to measure viewable impressions?
According to the Interactive Advertising Bureau (IAB), an ad at least 50 percent of which appears on screen for one second or longer (two seconds or longer for video ads) is considered a viewable impression.
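To get a feel for how such measurement works in the browser, here is a rough sketch using the IntersectionObserver API. The ad selector, element ids, and logging are placeholders, and real measurement vendors handle many more edge cases (background tabs, iframes, and so on):

const AD_SELECTOR = ".ad-slot"; // assumption: ad containers carry this class
const VIEWABLE_RATIO = 0.5; // IAB: at least 50 percent of pixels in view
const VIEWABLE_TIME_MS = 1000; // IAB: for at least one second (display ads)

const timers = new WeakMap();
const observer = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.intersectionRatio >= VIEWABLE_RATIO) {
      // The ad just became at least 50 percent visible: start the one-second clock.
      timers.set(entry.target, setTimeout(function () {
        console.log("Viewable impression:", entry.target.id);
        observer.unobserve(entry.target);
      }, VIEWABLE_TIME_MS));
    } else {
      // Visibility dropped below 50 percent before one second elapsed: reset.
      clearTimeout(timers.get(entry.target));
    }
  });
}, { threshold: [VIEWABLE_RATIO] });

document.querySelectorAll(AD_SELECTOR).forEach(function (ad) {
  observer.observe(ad);
});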
Bottom line: buying a "viewable" ad impression does not guarantee that it's going to be seen and/or clicked on. However, there are several ways you can improve the chances of your ad being viewed. It’s also important to understand that online ad success cannot be determined by views and clicks alone; you need to consider the entire buyer journey.
If you are a publishing firm looking for experts to integrate the needs of a publishing platform with Drupal, we can help. Get in touch.
With advances in technology, automation is playing a key role in software development processes, as it enables teams to verify regressions and functionality and to run tests simultaneously in the most efficient way. Technology never stands still and continuously evolves; likewise, web platforms like Drupal and other frameworks consistently enhance their tools to grab the market's attention. Further, automation is the best choice for web-based applications, as verifying and testing application interfaces is comparatively easier than it was for earlier web applications.
Before we talk about Behaviour Driven Development (BDD), let’s have a look at what automated tests offer:
Improved speed
Better test coverage
Better efficiency
Boosted developer and tester morale
Fewer human resources required
Cost efficiency
What is BDD?
BDD is a methodology used to develop software through example-based communication between developers, QA, project managers, and the business team.
The primary goal of BDD is to improve communication with the business team by making the functional requirements understood by all members of the development team, thereby avoiding ambiguity in requirements. This methodology helps deliver software through continuous communication, deliberate discovery, and test automation.
Why should we follow BDD methodology?
BDD is an extension of TDD (Test Driven Development). As in TDD, in BDD we write tests first and then add the application code. This is easy to describe using a ubiquitous language.
Further, BDD follows an example-based communication process between teams, QA, and business clients.
Example Based Communication
This helps the business team and developers clearly understand the client's requirements. BDD is largely facilitated through the use of a domain-specific language built from natural language constructs.
Gherkin
Gherkin is the language in which behavior is defined when writing features. Behat is a tool to test the behavior of your application, which is described in this special language designed especially for behavior descriptions.
Gherkin Structure
Behat is a BDD (Behavior Driven Development) framework for PHP, for automatically testing your business expectations. Behat is used to run the test cases written in the Gherkin structure.
Example structure of Feature File:
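The original sample is not included in this excerpt; a minimal sketch of a feature file (the feature name and steps are placeholders) looks like this:

Feature: Product search
  In order to find products quickly
  As a visitor
  I need to be able to search the catalog

  Scenario: Searching for an existing product
    Given I am on "/catalog"
    When I fill in "search" with "laptop"
    And I press "Search"
    Then I should see "Search results"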
Gherkin keywords and their descriptions are as follows:
Feature: A descriptive section of what is desired. It starts the feature and gives it a title. Behat doesn’t parse the next three lines of text, which give context to the people reading your feature.
Scenario: Something like a determinable business situation. It starts the scenario and contains a description of it.
Steps: A feature consists of steps known as Givens, Whens, and Thens. Behat doesn’t technically differentiate between these three kinds of steps.
A feature file can contain a single scenario to test the behavior of our application, or multiple scenarios.
Given: Defines the initial state of the system for the scenario.
When: Describes the action taken by the person/role.
Then: Describes the observable system state after the action has been performed.
And/But: Can be added to create multiples of Given/When/Then lines.
Prerequisites:
PHP higher than 5.3.5
The curl, mbstring, and xml extensions should be installed (Behat itself is a library and can easily be installed using Composer)
Installing Behat
Step 1: Follow the commands in the terminal to create a composer.json file. You can install Behat in any path you want - let’s say the root folder.
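The composer.json itself is not shown in this excerpt; a minimal sketch could look like the following (the package versions are assumptions):

{
  "require-dev": {
    "behat/behat": "^3.0",
    "behat/mink": "^1.7",
    "behat/mink-extension": "^2.2",
    "behat/mink-goutte-driver": "^1.2"
  }
}

Then run composer install from the same directory.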
If you want to run JavaScript tests with Selenium, also install behat/mink-selenium2-driver; otherwise it is not required.
To start using Behat in your project, call vendor/bin/behat --init. This will set up a features directory in the behat directory.
Step 2: After running the init command, open the FeatureContext.php file under /behat/features/bootstrap and add the code snippet below.
use Behat\Behat\Context\SnippetAcceptingContext;
use Behat\MinkExtension\Context\MinkContext;
Step 3: Extend your FeatureContext class with MinkContext and implement Context and SnippetAcceptingContext.
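Putting the pieces together, the class declaration would look roughly like this (a sketch; the Context use statement is added alongside the two shown above):

use Behat\Behat\Context\Context;

class FeatureContext extends MinkContext implements Context, SnippetAcceptingContext
{
    // Custom step definitions go here.
}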
Step 4: Now create a config file called behat.yml in the behat directory.
In the behat.yml file, we specify the base URL of the instance that Behat should test.
The Goutte driver acts as a bridge between Behat and your business application.
wd_host is the Selenium server endpoint - for a local setup it is 127.0.0.1:4444/wd/hub when integrating Selenium with Behat.
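The behat.yml itself is not included in this excerpt; a sketch covering the options discussed above (the base_url is a placeholder for your own site):

default:
  extensions:
    Behat\MinkExtension:
      base_url: "http://localhost/mysite"
      goutte: ~
      selenium2:
        wd_host: "http://127.0.0.1:4444/wd/hub"
      browser_name: firefox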
Note: If you want to test with Selenium integration, you should downgrade your Firefox to version 46, use Selenium standalone server version 2.52.0, and use geckodriver version 0.17.0 as your Firefox driver. Downloading the zip file is enough to start the Selenium server.
Currently, the Selenium integration works successfully in Firefox 46 with the appropriate Firefox drivers. If you want to test with Firefox, set browser_name: firefox in the behat.yml file.
In the feature file, we should add the “@javascript” tag before the scenario starts; only then will Behat use the Selenium server for browser testing.
Starting Selenium server
Start Selenium server for javascript testing
java -Dwebdriver.gecko.driver=./geckodriver -jar selenium-server-standalone-2.52.0.jar
(or)
java -jar selenium-server-standalone-2.52.0.jar
Don’t forget to specify @javascript in feature file to run Selenium testing.
Create a feature file under /behat/features/login.feature, for example as sketched below.
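The original login.feature is not shown here, so the following is a sketch; the field labels, credentials, and expected text are placeholders for your site's login form:

@javascript
Feature: User login
  In order to access my account
  As a registered user
  I need to be able to log in

  Scenario: Log in with valid credentials
    Given I am on "/user/login"
    When I fill in "Username" with "testuser"
    And I fill in "Password" with "secret"
    And I press "Log in"
    Then I should see "Log out"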
Once you are done with the features, scenarios, and steps, run them in the terminal from the path where Behat is installed: vendor/bin/behat will run all feature scripts.
You can also run a single feature file, like vendor/bin/behat features/file_name.feature.
If you want to run Selenium JavaScript testing with deliberate pauses for monitoring, you can do it with a scenario step like “And I wait for 2”.
The “iWaitFor” function is just a custom step definition; it is used in the feature file as “And I wait for 2”, where the number is specified in seconds.
Similarly, here is an example of triggering the “Enter” keyboard button; see the sketch below.
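The original snippets are not shown in this excerpt; sketches of what the two step definitions might look like inside the FeatureContext class (the regexes and the key code are assumptions):

/**
 * @Then /^I wait for (\d+)$/
 */
public function iWaitFor($seconds)
{
    sleep($seconds);
}

/**
 * @When /^I press the enter key in "([^"]*)"$/
 */
public function iPressTheEnterKeyIn($field)
{
    // 13 is the key code for the Enter/Return key.
    $this->getSession()->getPage()->findField($field)->keyPress(13);
}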
Running vendor/bin/behat -dl will show the full list of step definitions available to Behat.
We have covered a detailed description of behavior-driven development, followed by example-based communication between teams, QA, and business clients. We also touched on the Gherkin structure, usage of the Behat tool, and the installation procedure. This should give you an overall idea of JavaScript automation testing. Feel free to share your experiences and any issues you come across.
Media and publishing companies typically run their business on segmented revenue streams, like advertising and promotions, with each stream often managed and reported vertically to drive a steep rise in global annual turnover. But could it be the right time for publishers to focus on paid subscriptions and memberships to grow ARPU?
Thanks to major content players like Netflix, Amazon, and Spotify, who have leveraged the subscription model, it has been proven that users are willing to pay for quality content. Perhaps it’s high time the publishing industry focused more on fixing its business model and less on ways to further scale up content production.
As The New York Times puts it: “We are, in the simplest terms, a subscription-first business. Our focus on subscribers sets us apart in crucial ways from many other media organizations. We are not trying to maximize clicks and sell low-margin advertising against them. We are not trying to win a pageviews arms race. We believe that the more sound business strategy for The Times is to provide journalism so strong that several million people around the world are willing to pay for it.”
With this, the question arises: how are these organizations increasing their ARPU with paid subscriptions, memberships, events, and lead generation?
First, publishers need to work on their content strategy and user experience to boost audience engagement. At the same time, they need to consider their audience base and figure out their goal. They can also ask themselves: what unique value proposition can we offer our clients, and how do we ensure they are aware of it?
Of course, here, introducing a premium model should be the first step, but more needs to be done if the transition is to be successful. Let’s have a look at four key areas publishing houses can work on.
Paid Subscription: Publishers are turning directly to site viewers for paid subscriptions rather than relying on advertising. Typically, a subscription model requires users to pay money to get access to a product or service. It works best when the company provides highly specialized and unique information that readers can’t find anywhere else.
Membership: A membership is the notion of belonging. It says nothing of cost or price, though most memberships end up having a cost to them. Membership programs are exclusive and carry tremendous benefits. Being a valued member gets you access to other members - which may be the thing that is most valued.
Events: Hosting events to diversify revenue streams is nothing new. Often, event organizers combine their resources, showcasing the magazine’s content as part of a unique offering. A planned occasion not only provides an additional revenue stream but also increases the subscriber base. According to the American Press Institute, “Incorporating an events strategy with publishing also strengthens a publisher’s brand and bottom line while deepening connections with audiences, sponsors, and advertisers.”
Lead Generation: It typically starts with data collection - pushing prospects to landing pages and asking them to fill in lead-gen forms to get free ebooks, whitepapers, and other resources. This data is used to better understand the leads, engage with them, and enhance lead-gen programs. Here, email marketing helps publishers reach their target audience.
Further, publishers can utilize proven online marketing methods, such as webinars, co-registration, affiliate partnerships, search engine optimization and others.
Are you interested in adding a subscription feature to your own website to provide services to your customers? Not only do we help you keep track of customers and their subscriptions, but we also handle end-to-end Drupal management services. Get in touch with our experts to find out how you can use the subscription model to increase your ARPU.
Having a hard time finding a JavaScript library that can display stock and timeline charts in your web or mobile application? Recently, I was working on a Drupal project where the client's requirement was to add exactly such a feature to their web application. While doing secondary research, our team came across Highstock - a JavaScript library that allows you to create general timeline charts and embed them in a website.
First, have a look at what exactly Highstock is.
Highstock helps in displaying stock and timeline charts in web/mobile applications based on given data. A Highstock chart offers a wide range of features, like a basic navigator series, date range, date picker, and scroll bar. Still wondering how to use it to its fullest? Integrate the Drupal 8 REST API with the Highstock JavaScript library.
Integrating the Drupal 8 REST API with the Highstock JavaScript library.
Step 1: Create a custom module. In my case, I will be creating a module named highstock.
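The module's info file is not shown in this excerpt; a minimal highstock.info.yml sketch for Drupal 8 could be:

name: Highstock
type: module
description: 'Displays stock and timeline charts using the Highstock library.'
core: 8.x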
Step 2: Create a highstock.libraries.yml file to add the Highstock library.
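A sketch of what the library definition might look like (loading Highstock from its CDN is an assumption; you could also ship the file locally):

highstock:
  version: 1.x
  js:
    https://code.highcharts.com/stock/highstock.js: { type: external, minified: true }
    js/highstock_chart.js: {}
  dependencies:
    - core/jquery
    - core/drupal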
Step 3: Create a REST API resource, which provides the input for the chart.
Highstock accepts input in the following format: it requires an array structure containing comma-separated x-axis and y-axis data. So while creating the REST API, we need to generate output in this format, as sketched below.
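For example, a series is an array of [timestamp in milliseconds, value] pairs; the numbers below are made up:

[
  [1533772800000, 120.5],
  [1533859200000, 123.1],
  [1533945600000, 119.8]
]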
Step 3.1: In Drupal 8, create a HighstockChart.php file inside /src/Plugin/rest/resource.
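The resource file itself is not reproduced in this excerpt; a bare-bones sketch of such a plugin (the plugin id, route, and hard-coded data are placeholders, and cacheability handling is left out for brevity):

<?php

namespace Drupal\highstock\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * Provides chart data for Highstock.
 *
 * @RestResource(
 *   id = "highstock_chart",
 *   label = @Translation("Highstock chart"),
 *   uri_paths = {
 *     "canonical" = "/api/highstock-chart"
 *   }
 * )
 */
class HighstockChart extends ResourceBase {

  /**
   * Responds to GET requests with [timestamp, value] pairs.
   */
  public function get() {
    // In a real module this data would come from entities or the database.
    $data = [
      [1533772800000, 120.5],
      [1533859200000, 123.1],
      [1533945600000, 119.8],
    ];
    return new ResourceResponse($data);
  }

}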
Step 4: Create a highstock_chart.js file to integrate the REST API output with the Highstock library.
The Highstock library provides various types of charts, like single line series, line with marker and shadow, spline, step line, area spline, etc. You can find the types of charts here: https://www.highcharts.com/stock/demo.
In the JS, we call the API, which gives us the JSON output. The chart is rendered based on the series type that was configured.
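A sketch of what highstock_chart.js might contain (the endpoint, container id, and chart options are assumptions):

(function ($, Drupal) {
  Drupal.behaviors.highstockChart = {
    attach: function (context) {
      // Fetch the series data from the REST resource created above.
      $.getJSON('/api/highstock-chart?_format=json', function (data) {
        Highcharts.stockChart('highstock-chart', {
          rangeSelector: { selected: 1 },
          title: { text: 'Sample timeline chart' },
          series: [{ type: 'line', name: 'Value', data: data }]
        });
      });
    }
  };
})(jQuery, Drupal);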
Step 5: Create a block, HighstockChartBlock.php, to show the chart.
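A sketch of the block plugin; the container id matches the JS sketch above, and both are assumptions:

<?php

namespace Drupal\highstock\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides a Highstock chart block.
 *
 * @Block(
 *   id = "highstock_chart_block",
 *   admin_label = @Translation("Highstock chart block")
 * )
 */
class HighstockChartBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return [
      '#markup' => '<div id="highstock-chart"></div>',
      '#attached' => [
        'library' => ['highstock/highstock'],
      ],
    ];
  }

}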
Place the above block in any of your desired regions and it will display a chart like the one below:
The default JS provides the following properties:
Range selector
Date range
Scrollbar at the bottom
A menu icon on the right side with options to print the chart or download it in PNG, JPEG, SVG, or PDF format
Mouse hovering shows a marker with the x-axis and y-axis values highlighted
Properties of Highstock Chart:
The Highstock JavaScript library provides several properties and methods to configure the chart. All these configurations can be found in the Highstock API reference: https://api.highcharts.com/highstock/.
We can modify charts using those properties. I have referred to the above link and configured my charts as shown below:
Add the above properties in highstock_chart.js of your custom module. After applying all the properties, the chart will look similar to the image below.
This API is very handy when it comes to representing complex data structures to end users in the form of colorful charts. You should definitely pitch it to clients who are still presenting data in traditional tables, Excel sheets, etc. I hope you can now easily integrate the Drupal 8 REST API with Highstock. If you have any suggestions or queries, please leave a comment and I will try to answer.
Async and await are extensions of promises, so if you are not clear about the basics of promises, please get comfortable with them before reading further. You can read my post on Understanding Promises in JavaScript.
I am sure that many of you are using async and await already, but I think the topic deserves a little more attention. Here is a small test: if you can’t spot the problem with the code below, read on.
for (const name of ["nkgokul", "BrendanEich", "gaearon"]) {
const userDetails = await fetch("https://api.github.com/users/" + name);
const userDetailsJSON = await userDetails.json();
console.log("userDetailsJSON", userDetailsJSON);
}
We will revisit the above code block later, once we have gone through the async await basics. As always, the Mozilla docs are your friend - especially check out the definitions.
async and await
From MDN
An asynchronous function is a function which operates asynchronously via the event loop, using an implicit Promise to return its result. But the syntax and structure of your code using async functions is much more like using standard synchronous functions.
I wonder who writes these descriptions; they are so concise and well articulated. To break it down:
The function operates asynchronously via event loop.
It uses an implicit Promise to return the result.
The syntax and structure of the code is similar to writing synchronous functions.
And MDN goes on to say
An async function can contain an await expression that pauses the execution of the async function and waits for the passed Promise's resolution, and then resumes the async function's execution and returns the resolved value. Remember, the await keyword is only valid inside async functions.
Let us jump into code to understand this better. We will reuse the three functions we used for understanding promises here as well: two deterministic functions, one which resolves after n seconds and another which rejects after n seconds, and a third which randomly resolves or rejects after n seconds.
var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
resolve({
resolvedAfterNSeconds: n
});
}, n * 1000);
});
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
reject({
rejectedAfterNSeconds: n
});
}, n * 1000);
});
});
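The third helper, promiseTRRARNOSG, is used in the examples below but its definition is missing from this excerpt; here is a sketch consistent with how it is used (the random delay and the 50/50 outcome are assumptions):

var promiseTRRARNOSG = (promiseThatResolvesRandomlyAfterRandomNumberOfSecondsGenerator = function() {
  return new Promise(function(resolve, reject) {
    // Pick a random delay of 1 to 5 seconds, then resolve or reject at random.
    var n = Math.ceil(Math.random() * 5);
    setTimeout(function() {
      if (Math.random() < 0.5) {
        resolve({ resolvedAfterNSeconds: n });
      } else {
        reject({ rejectedAfterNSeconds: n });
      }
    }, n * 1000);
  });
});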
Since all three of these functions return promises, we can also call them asynchronous functions. See - we wrote async functions even before knowing about them.
If we had to use the function promiseTRSANSG in the standard promise format, we would write something like this.
var promise1 = promiseTRSANSG(3);
promise1.then(function(result) {
console.log(result);
});
There is a lot of unnecessary code here, like the anonymous function used just for assigning the handler. What async await does is improve the syntax for this, making it look more like synchronous code. If we had to do the same as above in async await format, it would be:
result = await promiseTRSANSG(3);
console.log(result);
Well, that looks much more readable than the standard promise syntax. When we use await, the execution of the code is blocked; that is why the value of the promise resolution ends up in the variable result. As you can make out from the above code sample, instead of the .then part, the result is assigned directly to a variable when you use await. You can also see that the .catch part is not present here; that is handled using try/catch error handling. So instead of promiseTRSANSG, let us use promiseTRRARNOSG. Since this function can either resolve or reject, we need to handle both scenarios. In the above code we wrote just two lines, to give you an easy comparison between the standard format and the async await format. The example in the next section gives you a better idea of the format and structure.
General syntax of using async await
async function testAsync() {
  for (let i = 0; i < 5; i++) {
    try {
      const result1 = await promiseTRRARNOSG();
      console.log("Result 1 ", result1);
      const result2 = await promiseTRRARNOSG();
      console.log("Result 2 ", result2);
    } catch (e) {
      console.log("Error", e);
    } finally {
      console.log("This is done");
    }
  }
}

testAsync();
From the above code example you can see that instead of using promise-specific error handling, we are using the more generic try/catch approach. That is one thing less to remember, and it also improves overall readability, even after accounting for the try/catch block around our code. Based on the level of error handling you need, you can add any number of catch blocks and make the error messages more specific and meaningful.
Pitfalls of using async and await
async await makes it much easier to use promises. Developers from a synchronous programming background will feel at home using async and await. This should also alert us: it means we are moving towards a more synchronous approach if we don’t keep watch.
The whole point of JavaScript/Node.js is to be asynchronous by default, not as an afterthought. async await generally means you are doing things in a sequential way, so make a conscious decision whenever you want to use async await.
Now let us start analysing the code that I flashed at you in the beginning.
for (const name of ["nkgokul", "BrendanEich", "gaearon"]) {
const userDetails = await fetch("https://api.github.com/users/" + name);
const userDetailsJSON = await userDetails.json();
console.log("userDetailsJSON", userDetailsJSON);
}
This seems like a harmless piece of code that fetches the GitHub details of three users: “nkgokul”, “BrendanEich”, and “gaearon”. Right. That is true; that is what this code does. But it also has some unintended consequences.
Before diving further into the code let us build a simple timer.
startTime = performance.now(); //Run at the beginning of the code
function executingAt() {
return (performance.now() - startTime) / 1000;
}
Now we can use executingAt wherever we want to print the number of seconds that have elapsed since the beginning.
async function fetchUserDetailsWithStats() {
  let i = 0;
  for (const name of ["nkgokul", "BrendanEich", "gaearon"]) {
    i++;
    console.log("Starting API call " + i + " at " + executingAt());
    const userDetails = await fetch("https://api.github.com/users/" + name);
    const userDetailsJSON = await userDetails.json();
    console.log("Finished API call " + i + " at " + executingAt());
    console.log("userDetailsJSON", userDetailsJSON);
  }
}
Checkout the output of the same.
async-await analysed
As you can see from the output, each awaited call starts only after the previous one has completed. We are trying to fetch the details of three different users: “nkgokul”, “BrendanEich”, and “gaearon”. It is pretty obvious that the output of one API call is in no way dependent on the output of the others.
The only dependence we have is these two lines of code.
We can create the userDetailsJSON object only after getting userDetails. Hence it makes sense to use await here - that is, within the scope of getting the details of a single user. So let us make an async function for getting the details of a single user.
async function fetchSingleUsersDetailsWithStats(name) {
  console.log("Starting API call for " + name + " at " + executingAt());
  const userDetails = await fetch("https://api.github.com/users/" + name);
  const userDetailsJSON = await userDetails.json();
  console.log("Finished API call for " + name + " at " + executingAt());
  return userDetailsJSON;
}
Now that fetchSingleUsersDetailsWithStats is async, we can use it to fetch the details of the different users in parallel.
async function fetchAllUsersDetailsParallelyWithStats() {
let singleUsersDetailsPromises = [];
for (const name of ["nkgokul", "BrendanEich", "gaearon"]) {
let promise = fetchSingleUsersDetailsWithStats(name);
console.log(
"Created Promise for API call of " + name + " at " + executingAt()
);
singleUsersDetailsPromises.push(promise);
}
console.log("Finished adding all promises at " + executingAt());
let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
console.log("Got the results for all promises at " + executingAt());
console.log(allUsersDetails);
}
When you want to run things in parallel, the thumb rule I follow is:
Create a promise for each async call. Add all the promises to an array. Then pass the promises array to Promise.all. This in turn returns a single promise, on which we can use await.
When we put all of this together we get
startTime = performance.now();
async function fetchAllUsersDetailsParallelyWithStats() {
let singleUsersDetailsPromises = [];
for (const name of ["nkgokul", "BrendanEich", "gaearon"]) {
let promise = fetchSingleUsersDetailsWithStats(name);
console.log(
"Created Promise for API call of " + name + " at " + executingAt()
);
singleUsersDetailsPromises.push(promise);
}
console.log("Finished adding all promises at " + executingAt());
let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
console.log("Got the results for all promises at " + executingAt());
console.log(allUsersDetails);
}
async function fetchSingleUsersDetailsWithStats(name) {
console.log("Starting API call for " + name + " at " + executingAt());
const userDetails = await fetch("https://api.github.com/users/" + name);
const userDetailsJSON = await userDetails.json();
console.log("Finished API call for " + name + " at " + executingAt());
return userDetailsJSON;
}
fetchAllUsersDetailsParallelyWithStats();
The output for this is
Promises run in parallel with timestamps
As you can make out from the output, promise creation is almost instantaneous, whereas the API calls take some time. This is worth stressing: the time taken for promise creation and processing is trivial compared to IO operations. So while choosing a promise library, it makes more sense to choose one that is feature rich and has a better dev experience. Since we are using Promise.all, all the API calls run in parallel. Each API call takes almost 0.88 seconds, but since they are called in parallel we get the results of all the API calls in 0.89 seconds.
In most scenarios, understanding this much should serve us well, and you can skip to the Thumb Rules section. But if you want to dig deeper, read on.
Digging deeper into await
For this, let us pretty much limit ourselves to the promiseTRSANSG function. The outcome of this function is more deterministic and will help us identify the differences.
Asynchronous execution starts as soon as the promise is created. await just blocks the code within the async function until the promise is resolved. Let us create a function which will help us clearly understand this.
var concurrent = async function() {
startTime = performance.now();
const resolveAfter3seconds = promiseTRSANSG(3);
console.log("Promise for resolveAfter3seconds created at ", executingAt());
const resolveAfter4seconds = promiseTRSANSG(4);
console.log("Promise for resolveAfter4seconds created at ", executingAt());
resolveAfter3seconds.then(function(){
console.log("resolveAfter3seconds resolved at ", executingAt());
});
resolveAfter4seconds.then(function(){
console.log("resolveAfter4seconds resolved at ", executingAt());
});
console.log(await resolveAfter4seconds);
console.log("await resolveAfter4seconds executed at ", executingAt());
console.log(await resolveAfter3seconds);
console.log("await resolveAfter3seconds executed at ", executingAt());
};
concurrent();
Concurrent start and then await
From the previous post we know that .then is event driven: .then is executed as soon as the promise is resolved. So let us use resolveAfter3seconds.then and resolveAfter4seconds.then to identify when our promises are actually resolved. From the output we can see that resolveAfter3seconds is resolved after 3 seconds and resolveAfter4seconds is resolved after 4 seconds. This is as expected.
Now, to check how await affects the execution of the code, we have used await on both promises in the code above.
As we saw from the .then output, resolveAfter3seconds resolved one second before resolveAfter4seconds. But we await resolveAfter4seconds first, followed by the await for resolveAfter3seconds.
From the output we can see that though resolveAfter3seconds was already resolved, its value got printed only after the output of console.log(await resolveAfter4seconds); was printed. This reiterates what we said earlier: await only blocks the execution of the next lines of code in the async function and doesn’t affect the promise execution.
Disclaimer
The MDN documentation mentions that Promise.all is still serial and that using .then is truly parallel. I have not been able to understand the difference and would love to hear back if anybody has wrapped their head around it.
Thumb Rules
Here is a list of thumb rules I use to keep my head sane around using async and await.
async functions return a promise.
async functions use an implicit Promise to return their result. Even if you don’t return a promise explicitly, the async function makes sure your result is passed through a promise.
await blocks the code execution within the async function of which it (the await statement) is a part.
There can be multiple await statements within a single async function.
When using async await make sure to use try catch for error handling.
If your code contains blocking code, it is better to make it an async function. By doing this you are making sure that somebody else can use your function asynchronously.
By making async functions out of blocking code, you are enabling the user who will call your function to decide on the level of asynchronicity they want.
Be extra careful when using await within loops and iterators. You might fall into the trap of writing sequentially executing code when it could have been easily done in parallel.
await is always for a single promise. If you want to await multiple promises (run the promises in parallel), create an array of promises and then pass it to the Promise.all function.
Promise creation starts the execution of asynchronous functionality.
await only blocks the code execution within the async function. It only makes sure that the next line is executed when the promise resolves. So if an asynchronous activity has already started, await will not have an effect on it.
Please point out if I am missing something here or if something can be improved.
The last few months have been quite challenging for media & publishing enterprises dealing with the EU’s new data privacy law - GDPR - and Drupal's highly critical vulnerability - DrupalGeddon 2.
On 28 March 2018, Drupal published the alert for DrupalGeddon 2 (SA-CORE-2018-002 / CVE-2018-7600), which was patched by the security team. The vulnerability was potent enough to affect the vast majority of Drupal 6, 7, and 8 websites.
Earlier, in October 2014, Drupal faced a similar vulnerability - tagged DrupalGeddon. At that time, sites that were not patched within seven hours of the critical security update were advised to assume they had been compromised.
So here the question is - how vulnerable is Drupal?
In short, we can’t specify exactly how vulnerable Drupal is, as it entirely depends on the context. You will possibly find the answer to this question in one of our previous posts, where we talked about “Drupal Security Advisor Data”.
Implement these measures to secure your Drupal website
1. Upgrade to the latest version of Drupal
Whether it is your operating system, antivirus or Drupal itself, running the latest version is always suggested. And this is the least you can and should do to protect your website.
The updates not only bring new features but also enhance security. Further, you should also keep modules updated, as outdated modules are most often the cause of misery. It's always recommended to check the update report and keep updating at regular intervals. The latest version is Drupal 8.3.1.
Note that hackers usually target older versions of a CMS, as they are more vulnerable.
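If you manage your site with Drush, a quick way to check for and apply updates is sketched below (assuming Drush 8; always take a backup first):

drush pm-updatestatus   # list pending updates for core and contributed modules
drush pm-update         # download and apply the available updates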
2. Remove unnecessary modules
Agreed, modules play a critical role in enhancing the user experience. However, you should be wary of downloading too many, as each module increases the attack surface. Also, ensure that a module has a sizable number of downloads.
If a vulnerability does occur in such a module, it will be resolved quickly by the community, since it can affect a major chunk of companies and individuals. Furthermore, you can remove or uncheck unused modules, or uninstall them completely.
3. Practice strong user management
In a typical organization, several individuals require access to the website to manage different areas within it. These users can potentially be the source of a security breach, so it is important to keep control of their permissions.
Give limited access to the site instead of giving access to the whole site by default. And when a user leaves the organization, they should be promptly removed from the administrator list to eliminate any unnecessary risk. Read on for a quick review of “managing user roles & permission in Drupal 8”.
4. Choose a proper hosting provider
It's always a dilemma to figure out which hosting provider to trust with your website. Needless to say, the hosting provider plays a key role in ensuring its security. Look for one that offers a security-first Drupal hosting solution with all the server-side security measures, like SSL.
5. Enable HTTPS
As a core member of the development team, a business owner, or a decision maker, it's your responsibility to take ownership of the security of your enterprise website. Serving the site over HTTPS protects your users' traffic, and login credentials in particular, from interception.
Consider performing a check for common vulnerabilities at regular intervals, as it will allow you to make quick work of those holes by following the prompts. Here is what Drupal experts have to say about "securing users' private data from unauthorized access".
6. Backup regularly
Plan for the worst. Keep backups of your codebase and database handy. There are a number of reasons, both accidental and intentional, that can destroy your hard work. Here is a list of reasons why you should regularly back up your website:
General safety
The original version of your site has aged
Respond quickly if your site is hacked
Updates went wrong
To sum up, follow the above-mentioned steps to secure your Drupal website. Also, reporting a security breach to the Drupal community can be an effective way to patch the issue and seek help from the community to avoid massive risk.
Well, the title was hyperbole. Now that I have your attention, let us get started. It might be a stretch to say that we can kill twitter today. But in this post I would like to show that it may not be impossible after all, at least in a couple of years.
A few things to know before we start killing twitter.
It starts with realising that you are doing a favour to twitter, and twitter is not doing a favour to you. Yes, I agree that twitter has been a great tool; it even played a part in the Arab Spring. Check out “Social Media Made the Arab Spring, But Couldn't Save It” for further details.
But we need to realise that while these are pleasant side-effects of twitter and social media, for a service or business to be sustainable it has to be profitable, or at least have profit-generating potential in the future. Irrespective of whether the service follows an ad-revenue-based model or a freemium model, one thing is common: either you have to pay for the service, or the service needs to sell something to somebody.
Understanding what is that something that is sold and to whom it is sold is important.
Let us start with the most quoted line regarding free, or seemingly free, services.
Most social media users forget the value they are adding to the networks. It is easy for us to see a blog post or a video as data/content, but we fail to realise that even the short status updates we post on social media websites, and our comments on them, are also content.
Every action we take on social media is valuable, and it adds to the valuation of the platform. How much each action is valued, and how, requires a detailed analysis (I will be following up this post with a couple of related posts on the topic). But for now, let us understand this much.
Every action that we do on a social media website falls into one of the following categories.
Content creation
Content curation
Content Distribution
Training the AI models.
I have tried to highlight the same in this tweet of mine.
It is difficult for people to understand this, as they cannot see it clearly - or rather, there is no obvious way for them to see it. It only becomes clear in conversations like the following. In February this year, when aantonop was complaining about how Facebook was locking him out, one of the users mentioned this.
Anton’s reply was interesting.
So it brings us to the question of who is benefitting from whom. Is the platform benefitting from the user, or is the user benefitting from the platform? At best it is a synergy between the platform and the user. At worst, the platform is ripping off your data and making a hell of a lot of money while not rewarding you in any way.
What is your data worth?
Data, and the value it creates, has different lifetimes, and there are lots of overlaps, so it is difficult to put a value on it. Let us use a very crude way to identify the average minimum value of our data on Facebook. Facebook is valued at 600 billion USD today, and there are around 2 billion users on Facebook. Since Facebook makes money primarily by showing ads or/and selling your data :P, the data created by each user should be worth at least 300 USD.
One thing that everybody seems to agree on is that data is the new oil and it is valuable. But what most of us fail to understand is that oil has a single lifecycle, whereas data has multiple life-cycles. So any valuation you put on a piece of data is only a moving value affected by various parameters. We also need to realise that data we consider archived or stale still has revenue-generating potential in the future. AI models will need a lot of data going forward and will unlock the revenue-generating potential of your data. In the following article you can check out how Bottos and DataBroker DAO are unlocking the potential of data from various sources.
The two ways to realise the true value of your data
There are two ways you will realise that your data is worth something.
One: have somebody like Zuck sell your data and make billions in the process.
Two: look at the real money people make with data.
1. When your data is sold
The Cambridge Analytica exposé happened on March 17, 2018. It made clear that user targeting is not just for ads and can be used for much more, and it raised serious concerns about users’ privacy. The exposé once again proved that privacy is dead. What is more disturbing is that experts suggested this might seriously affect Facebook’s future valuation - but that turned out to be completely false. Can you spot the dip in Facebook's market cap caused by this scandal? I have highlighted it with a red circle for you, towards the right end of the graph. This is what I would call “a major dip in the short term but a minor blip in the long term”. The quick correction back to the trend line only suggests that nobody takes privacy seriously any more.
Facebook Marketcap
2. When you look at real money people make with your data
I am sure that Andreas M. Antonopoulos knows the value of data. I am taking this example because it was a high-profile case where data created elsewhere was able to generate revenue on another platform thanks to data distribution. The interesting thing is that in this case the money made was being used for translating aantonop’s videos into other languages. You can read more about it here.
The real aantonop
Aantonop made the above post, which can be called a “proof of identity” post, verifying that he is the real aantonop. The post gathered a lot of attention and has rewards of 1,449 USD. I just hope that aantonop claims the amount one day and starts using Steem more frequently.
I took aantonop’s example because he is very popular in the world of Bitcoin, and his videos have helped many entrepreneurs take the plunge into Bitcoin. His videos are proof that well-made content has a long shelf life and has revenue-generating potential even outside the platform it was created on.
Now let's get back to our original question.
How to kill twitter?
This might seem like an impossible proposition to many. Let us look at the reasons why it is difficult to kill twitter or facebook for that matter.
I don’t need another social network.
I first got to know about Robert Scoble in the Google+ days. I invited him to check out the Steemit platform and he replied with “I don’t need another social network.” Today we are in an age of social media overload. A new social media platform needs to cross a critical mass before everyone else follows. Replacing Facebook might be impossible for the next few years, but we might have a chance to replace twitter with a decentralised version. Facebook has too much of a lead: it has your photos, videos, friends, memories, groups, and pages, and any new entrant needs to address all of these to overcome Facebook. With twitter, by contrast, a limited feature set with additional benefits should be able to move the needle in the new entrant’s favour.
So for now let us assume given enough motivation users might consider shifting to the new platform.
Twitter has first mover advantage
Twitter is huge, and yes, it has first-mover advantage. But the last year has proven that with the right incentive model you can get a jumpstart: Binance became the fastest unicorn in history.
So don’t be surprised if a new entrant replaces twitter in less than a year.
Show me the money
Attributing a value to content is a tough task, and there have been many unsuccessful attempts in the past. I think the Steem blockchain has come further than any other attempt. By incentivising both content creation and content curation, Steem has figured out a subjective way to attribute value to content. With the release of SMTs later this year, the community will only get better at arriving at closer estimations of the value of posts. When people were told that their content was worth something, they were not able to relate to it. With platforms like Steem having put a definitive value on content and having paid it to the content creators (which many have encashed to fiat), the idea is more palpable now. Monetary incentives can do wonders, and as more people get to know about these platforms, the effect will only compound.
Hitting the critical mass
To be a serious contender to twitter the new platform needs to hit the critical mass. This can be the real challenge. So here are the things that can be done.
Create a distributed cryptocurrency along the lines of Steem (especially the rewards mechanism part). Keep the interface, UX, and restrictions (like the number of characters) very similar to twitter, so that people feel at home ;)
In addition to normal account creation, have a reserved namespace, twitter-[twitter-handle]. This will be reserved for creating a one-to-one mapping of user accounts from twitter to the new blockchain.
User accounts for each twitter user are also created on the new platform. Both usernames and passwords (private keys) will be created. Twitter users can claim their password by sending a tweet to the twitter handle of the new blockchain. The password or private keys will be DM’ed to the users.
Since all tweets are public, duplicate them on the new platform under the users’ accounts. If that is a stretch, it can start with the latest tweets of popular accounts and expand slowly.
The beta users will have access to popular content on the new platform. Their retweets and likes will decide the value of the new tweets mirrored from twitter.
While users might be hesitant to create new accounts, I think there will be very few people who will not be happy to claim their accounts - especially when they know that there are rewards waiting for them to encash for the content they have created.
The incentive or rewards to be received on the new platform will be bigger for users with a huge number of followers (assuming that their content is also liked by the beta users on the new platform). So if these influencers move to the new platform, they will also bring along at least some part of their followers.
Considering that the content on the blockchain will be censorship resistant and that it rewards good content, the platform should be able to take off and hit critical mass very soon.
I am not sure what legal issues would surround an attempt like this, but I think it is definitely worth trying. A few crypto-millionaires coming together should have enough funds to try something like this. What do you think? Will an attempt like this work? Share your thoughts.
I am making you a pinky promise that by the end of this post you will know JavaScript Promises better.
I have had a kind of “love and hate” relationship with JavaScript, but it was always intriguing to me. Having worked on Java and PHP for the last 10 years, I found JavaScript very different. I did not get to spend enough time on it and have been trying to make up for that of late.
Promises were the first interesting topic I came across. Time and again I have heard people say that promises save you from callback hell. While that might be a pleasant side-effect, there is more to promises, and here is what I have been able to figure out till now.
Background
When you start working with JavaScript for the first time, it can be a little frustrating. You will hear some people say that JavaScript is a synchronous programming language, while others claim it is asynchronous. You hear about blocking code, non-blocking code, event-driven design patterns, event life cycles, function stacks, event queues, bubbling, polyfills, Babel, Angular, React, Vue, and a ton of other tools and libraries. Fret not. You are not the first; there is a term for this as well. It is called JavaScript fatigue. You should check out the following article - there is a reason the post got 42k claps on Hackernoon :)
JavaScript is a synchronous programming language. But thanks to callback functions we can make it function like an asynchronous programming language.
Promises for layman
Promises in JavaScript are very similar to promises in real life, so let us look at promises in real life first. The definition of a promise from the dictionary is as follows:
promise : noun : Assurance that one will do something or that a particular thing will happen.
So what happens when somebody makes you a promise?
A promise gives you an assurance that something will be done. Whether they (who made the promise) will do it themselves, or will get it done by others, is immaterial. They give you an assurance, based on which you can plan something.
A promise can either be kept or broken.
When a promise is kept you expect something out of that promise which you can make use of for your further actions or plans.
When a promise is broken, you would like to know why the person who made the promise was not able to keep up his side of the bargain. Once you know the reason and have a confirmation that the promise has been broken you can plan what to do next or how to handle it.
At the time of making a promise, all we have is an assurance. We will not be able to act on it immediately. We can decide and formulate what needs to be done when the promise is kept (and hence we have the expected outcome) or when the promise is broken (we know the reason and hence we can plan a contingency).
There is a chance that you may not hear back from the person who made the promise at all. In such cases, you would prefer to keep a time threshold. Say, if the person who made the promise doesn't come back to me within 10 days, I will assume he had some issues and will not keep his promise. And even if the person comes back to you after 15 days, it no longer matters, as you have already made alternate plans.
Promises in JavaScript
As a rule of thumb, for JavaScript I always read the documentation from MDN Web Docs. Of all the resources, I think they provide the most concise details. I read the Promises page from MDN Web Docs and played around with code to get a hang of it.
There are two parts to understanding promises: the creation of promises, and the handling of promises. Though most of our code will generally cater to handling promises created by other libraries, a complete understanding will help us for sure, and understanding promise creation becomes equally important once you cross the beginner stage.
Creation of Promises
Let us look at the signature for creating a new promise.
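The signature itself is not reproduced in this excerpt; as documented on MDN, the constructor takes a single executor function:

new Promise(/* executor */ function (resolve, reject) {
  // Typically, kick off some asynchronous work here, then call
  // resolve(value) on success or reject(reason) on failure.
});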
The constructor accepts a function called the executor. This executor function accepts two parameters, resolve and reject, which are in turn functions. Promises are generally used for easier handling of asynchronous operations or blocking code - file operations, API calls, DB calls, IO calls, etc. These asynchronous operations are initiated within the executor function. If they succeed, the expected result is returned by calling the resolve function; similarly, if something unexpected goes wrong, the reason is passed on by calling the reject function.
Now that we know how to create a promise, let us create a simple one for understanding's sake.
var keepsHisWord;
keepsHisWord = true;
promise1 = new Promise(function(resolve, reject) {
if (keepsHisWord) {
resolve("The man likes to keep his word");
} else {
reject("The man doesnt want to keep his word");
}
});
console.log(promise1);
Every promise has a state and value
Since this promise gets resolved right away, we will not be able to inspect its initial state. So let us create a new promise that will take some time to resolve; the easiest way is to use the setTimeout function.
promise2 = new Promise(function(resolve, reject) {
setTimeout(function() {
resolve({
message: "The man likes to keep his word",
code: "aManKeepsHisWord"
});
}, 10 * 1000);
});
console.log(promise2);
The above code just creates a promise that will resolve unconditionally after 10 seconds, so we can check the state of the promise until it is resolved.
state of promise until it is resolved or rejected
Once the ten seconds are over, the promise is resolved and both PromiseStatus and PromiseValue are updated accordingly. As you can see, we updated the resolve function so that we can pass a JSON object instead of a simple string - just to show that we can pass other values as well.
A promise that resolves after 10 seconds with a JSON object as returned value
Now let us look at a promise that will reject. For this, let us just modify promise 1 a little.
keepsHisWord = false;
promise3 = new Promise(function(resolve, reject) {
  if (keepsHisWord) {
    resolve("The man likes to keep his word");
  } else {
    reject("The man doesn't want to keep his word");
  }
});
console.log(promise3);
Since this creates an unhandled rejection, the Chrome browser will show an error. You can ignore it for now; we will get back to it later.
rejections in promises
As we can see, PromiseStatus can have three different values: pending, resolved, or rejected. When a promise is created, PromiseStatus is in the pending state and PromiseValue is undefined until the promise is either resolved or rejected. When a promise is in the resolved or rejected state, it is said to be settled. So a promise generally transitions from the pending state to the settled state.
Now that we know how promises are created we can look at how we can use or handle promises. This will go hand in hand with understanding the Promise object.
Understanding the Promise Object
As per MDN documentation
The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.
The Promise object has static methods and prototype methods. Static methods on the Promise object can be applied independently, whereas the prototype methods need to be applied on instances of the Promise object. Remembering that both kinds of methods return a Promise makes it much easier to make sense of things.
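To make the distinction concrete, here is a minimal sketch; the variable names are just for illustration.
// Static method: called on the Promise constructor itself
var p = Promise.resolve(42);
// Prototype method: called on an instance, and it returns a new Promise
var p2 = p.then(function(value) {
  return value * 2;
});
console.log(p2 instanceof Promise); // true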
Prototype Methods
Let us first start with the prototype methods; there are three of them. Just to reiterate, all these methods can be applied on an instance of the Promise object, and all of them return a promise in turn. Each of the following methods assigns a handler for a different state transition of a promise. As we saw earlier, when a Promise is created it is in the pending state. One or more of the following three handlers will run when a promise is settled, based on whether it is fulfilled or rejected.
Promise.prototype.catch(onRejected)
Promise.prototype.then(onFulfilled, onRejected)
Promise.prototype.finally(onFinally)
The below image shows the flow for .then and .catch methods. Since they return a Promise they can be chained again which is also shown in the image. If .finally is declared for a promise then it will be executed whenever a promise is settled irrespective of whether it is fulfilled or rejected.
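For instance, a minimal chain could look like the sketch below; the values are illustrative.
Promise.resolve(1)
  .then(function(value) {
    console.log("First handler received", value);
    return value + 1; // the value returned here feeds the next .then
  })
  .then(function(value) {
    console.log("Second handler received", value);
  })
  .catch(function(reason) {
    console.log("Any rejection in the chain lands here", reason);
  })
  .finally(function() {
    console.log("Runs once the chain settles, either way");
  });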
Here is a small story. You are a school-going kid, and you ask your mom for a phone. She says, “I will buy a phone at the end of this month.”
Let us look at how it will look in JavaScript if the promise gets executed at the end of the month.
var momsPromise = new Promise(function(resolve, reject) {
  momsSavings = 20000;
  priceOfPhone = 60000;
  if (momsSavings > priceOfPhone) {
    resolve({
      brand: "iphone",
      model: "6s"
    });
  } else {
    reject("We do not have enough savings. Let us save some more money.");
  }
});
momsPromise.then(function(value) {
  console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
});
momsPromise.catch(function(reason) {
  console.log("Mom couldn't buy me the phone because ", reason);
});
momsPromise.finally(function() {
  console.log(
    "Irrespective of whether my mom can buy me a phone or not, I still love her"
  );
});
The output for this will be:
mom's failed promise.
If we change the value of momsSavings to 200000, mom will be able to gift the phone. In that case the output will be:
mom keeps her promise.
Let us now wear the hat of somebody who consumes this library. We are mocking the output and behaviour so that we can look at how to use then and catch effectively.
Since .then can assign both the onFulfilled and onRejected handlers, instead of writing separate .then and .catch we could have done the same with a single .then. It would have looked like this:
momsPromise.then(
  function(value) {
    console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
  },
  function(reason) {
    console.log("Mom couldn't buy me the phone because ", reason);
  }
);
But for readability of the code I think it is better to keep them separate.
To make sure that we can run all these samples in browsers in general, and Chrome in particular, I am making sure that we have no external dependencies in our code samples. To better understand the upcoming topics, let us create a function that returns a promise which resolves or rejects randomly, so that we can test various scenarios. To bring in asynchrony, let us also introduce a random delay into our function. Since we will need random numbers, let us first create a random function that returns a random number between x and y.
function getRandomNumber(start = 1, end = 10) {
  // works when both start and end are >= 1 and end > start
  return (parseInt(Math.random() * end) % (end - start + 1)) + start;
}
Let us now create a function that returns a promise. Let us call our function promiseTRRARNOSG, which is an alias for promiseThatResolvesRandomlyAfterRandomNumberOfSecondsGenerator. This function will create a promise which resolves or rejects after a random number of seconds between 2 and 10. To randomise rejecting and resolving, we will generate a random number between 1 and 10: if it is greater than 5 we will resolve the promise, else we will reject it.
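Here is a sketch of such a generator following the rules above; the keys in the settled objects are illustrative.
var promiseTRRARNOSG = (promiseThatResolvesRandomlyAfterRandomNumberOfSecondsGenerator = function() {
  return new Promise(function(resolve, reject) {
    // settle after a random delay between 2 and 10 seconds
    var randomNumberOfSeconds = getRandomNumber(2, 10);
    setTimeout(function() {
      // randomise the outcome: greater than 5 resolves, else reject
      var randomiseResolving = getRandomNumber(1, 10);
      if (randomiseResolving > 5) {
        resolve({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      } else {
        reject({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      }
    }, randomNumberOfSeconds * 1000);
  });
});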
var testPromise = promiseTRRARNOSG();
testPromise.then(function(value) {
  console.log("Value when promise is resolved : ", value);
});
testPromise.catch(function(reason) {
  console.log("Reason when promise is rejected : ", reason);
});
Let us also loop through and create ten different promises using the function to see some variation. Some will be resolved and some will be rejected.
for (let i = 1; i <= 10; i++) {
  let promise = promiseTRRARNOSG();
  promise.then(function(value) {
    console.log("Value when promise is resolved : ", value);
  });
  promise.catch(function(reason) {
    console.log("Reason when promise is rejected : ", reason);
  });
}
Refresh the browser page and run the code in the console to see different outputs for the resolve and reject scenarios. Going forward, we will see how we can create multiple promises and check their outputs without having to do this.
Static Methods
There are four static methods on the Promise object.
The first two are helper methods or shortcuts: they help you create resolved or rejected promises easily.
Promise.reject(reason)
Helps you create a rejected promise.
var promise3 = Promise.reject("Not interested");
promise3.then(function(value) {
  console.log("This will not run as the promise is rejected. The resolved value is ", value);
});
promise3.catch(function(reason) {
  console.log("This will run as the promise is rejected. The reason is ", reason);
});
Promise.resolve(value)
Helps you create a resolved promise.
var promise4 = Promise.resolve(1);
promise4.then(function(value) {
  console.log("This will run as it is a resolved promise. The resolved value is ", value);
});
promise4.catch(function(reason) {
  console.log("This will not run as it is a resolved promise", reason);
});
On a side note, a promise can have multiple handlers, so you can update the above code to:
var promise4 = Promise.resolve(1);
promise4.then(function(value) {
  console.log("This will run as it is a resolved promise. The resolved value is ", value);
});
promise4.then(function(value) {
  console.log("This will also run, as multiple handlers can be added. Printing twice the resolved value, which is ", value * 2);
});
promise4.catch(function(reason) {
  console.log("This will not run as it is a resolved promise", reason);
});
And the output will look like this:
The next two methods help you process a set of promises. When you are dealing with multiple promises, it is better to create an array of promises first and then perform the necessary action over the set. For understanding these methods we will not be able to use our handy promiseTRRARNOSG, as it is too random; it is better to have some deterministic promises so that we can understand the behaviour. Let us create two functions: one that resolves after n seconds and one that rejects after n seconds.
var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(n = 0) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve({
        resolvedAfterNSeconds: n
      });
    }, n * 1000);
  });
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(n = 0) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject({
        rejectedAfterNSeconds: n
      });
    }, n * 1000);
  });
});
Now let us use these helper functions to understand Promise.all.
Promise.all
As per MDN documentation
The Promise.all(iterable) method returns a single Promise that resolves when all of the promises in the iterable argument have resolved or when the iterable argument contains no promises. It rejects with the reason of the first promise that rejects.
Case 1 : When all the promises are resolved. This is the most frequently used scenario.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(2));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.log("One of the promises failed with the following reason", reason);
});
All promises resolved.
There are two important observations we need to make from the output.
First: the third promise, which takes 2 seconds, finishes before the second promise, which takes 4 seconds. But as you can see in the output, the order of the promises is maintained in the values.
Second: I added a console timer to find out how long Promise.all takes. If the promises were executed sequentially, it would have taken 1+4+2=7 seconds in total. But our timer shows that it takes only 4 seconds. This proves that the promises were executed in parallel.
Case 2 : When there are no promises in the array. I think this is the least frequently used scenario.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(1);
promisesArray.push(4);
promisesArray.push(2);
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.log("One of the promises failed with the following reason", reason);
});
Since there are no promises in the array, the returned promise resolves right away, with the plain values passed through in order.
Case 3 : It rejects with the reason of the first promise that rejects.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRJANSG(2));
promisesArray.push(promiseTRSANSG(4));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.timeEnd("Promise.All");
console.log("One of the promises failed with the following reason ", reason);
});
Execution stopped after the first rejection
Promise.race
As per MDN documentation
The Promise.race(iterable) method returns a promise that resolves or rejects as soon as one of the promises in the iterable resolves or rejects, with the value or reason from that promise.
Case 1 : One of the promises resolves first.
console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRSANSG(2));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
console.timeEnd("Promise.race");
console.log("The fasted promise resolved", values);
});
promisesRace.catch(function(reason) {
console.timeEnd("Promise.race");
console.log("The fastest promise rejected with the following reason ", reason);
});
fastest resolution
All the promises are run in parallel. The third promise resolves in 2 seconds. As soon as this is done the promise returned by Promise.race is resolved.
Case 2: One of the promises rejects first.
console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(6));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
console.timeEnd("Promise.race");
console.log("The fasted promise resolved", values);
});
promisesRace.catch(function(reason) {
console.timeEnd("Promise.race");
console.log("The fastest promise rejected with the following reason ", reason);
});
fastest rejection
All the promises are run in parallel. The fourth promise rejected in 3 seconds. As soon as this is done the promise returned by Promise.race is rejected.
I have written all the example methods so that I can test various scenarios right in the browser. That is the reason you don’t see any API calls, file operations or database calls in the examples. While all of those are real-life examples, they need additional effort to set up and test, whereas the delay functions give you similar scenarios without the burden of additional setup. You can easily play around with the values to check out different scenarios, and you can combine the promiseTRJANSG, promiseTRSANSG and promiseTRRARNOSG methods to simulate enough scenarios for a thorough understanding of promises. Also, using console.time before and after relevant blocks helps us easily identify whether the promises ran in parallel or sequentially. Let me know if you have any other interesting scenarios or if I have missed something. If you want all the code samples in a single place, check out this gist.
Bluebird, a popular promise library, has some interesting features like:
Promise.prototype.timeout
Promise.some
Promise.promisify
We will discuss these in a separate post.
I will also be writing one more post about my learnings from async and await.
Before closing, I would like to list down all the rules of thumb I follow to keep my head sane around promises.
Use promises whenever you are using async or blocking code.
resolve maps to then and reject maps to catch for all practical purposes.
Make sure to write both .catch and .then methods for all the promises.
If something needs to be done in both the cases, use .finally
We only get one shot at settling (mutating) each promise; later calls to resolve or reject are ignored (see the sketch after this list).
We can add multiple handlers to a single promise.
The return type of all the methods in the Promise object, whether static methods or prototype methods, is again a Promise.
In Promise.all the order of the promises is maintained in the values variable, irrespective of which promise resolved first.
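A minimal sketch of the one-shot rule mentioned in the list: only the first call to resolve or reject counts.
var onlyOnce = new Promise(function(resolve, reject) {
  resolve("first value wins");
  resolve("ignored");     // a promise cannot be resolved twice
  reject("also ignored"); // nor rejected once it is already resolved
});
onlyOnce.then(function(value) {
  console.log(value); // logs "first value wins"
});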
CoinMarketCap has some global charts which help you get insights into the overall cryptocurrency markets. You can find them at https://coinmarketcap.com/charts/. I was particularly interested in the dominance chart, as I was trying to analyze how Bitcoin and altcoin dominance affects markets and what role it played on important dates in the last year.
Recently, when I was sampling data from the above chart for an article about “Bitcoin Dominance and the Rise of Others” - https://medium.com/@gokulnk/bitcoin-dominance-and-the-emergence-of-others-64a7996272ad - it was taking a lot of time to get the data, and it was really irritating. I had to mouse over the chart and copy the data manually into the Medium article I was writing. I was using Mac split-screen for this, and it was not easy to switch focus between the split screens, which only added to the frustration. Comment if you know how to do it.
So I set out to write a small script to fetch the data. Though the script took a little longer than I expected, I think it will save me a lot of time going forward whenever I want to do sampling. I am putting the script out so that others can use it too.
Just visit the page https://coinmarketcap.com/charts/ and copy-paste the following code into the console to get the relevant data. You can also edit the coinsForDominance and datesForDominance variables to get the data that you need.