Integrating Drupal 8 REST API with Highstock

Having a hard time finding a JavaScript library that can display stock and timeline charts on your web or mobile application? Recently, I was working on a Drupal project where the client's requirement was to add exactly such a feature to their web application. While doing secondary research, our team came across Highstock, a JavaScript library that allows you to create general timeline charts and embed them on a website.

What exactly is Highstock?

Highstock displays stock and timeline charts for web/mobile applications based on the data you feed it. A Highstock chart offers a wide range of features such as a basic navigator series, a date range, a date picker, and a scrollbar. Wondering how to use it to its fullest? Integrate a Drupal 8 REST API with the Highstock JavaScript library.

Integrating Drupal 8 REST API with the Highstock JavaScript library

Step 1: Create a custom module. In my case, I will be creating a module named highstock.

Step 2: Create a highstock.info.yml file.

Step 3: Create a highstock.libraries.yml file to add the Highstock library.

Step 4: Create a REST API resource, which provides the input for the chart.

Highstock accepts input in the following format: an array of [x, y] pairs, where each pair holds the x-axis value (a timestamp in milliseconds) and the y-axis value, separated by a comma. So while creating the REST API we need to generate output in the following format.

[
[1297987200000,204011724],
[1298332800000,218135561],
[1298419200000,167962942],
[1298505600000,124974514],
[1298592000000,95004483],
[1298851200000,100768479]
]

Step 4.1: In Drupal 8, create a HighstockChart.php file inside /src/Plugin/rest/resource.

Step 5: Create a highstock_chart.js file to integrate the REST API output with the Highstock library.

The Highstock library provides various chart types, such as single line series, line with marker and shadow, spline, step line, and area spline. You can find the full list of chart types here: https://www.highcharts.com/stock/demo.

In the JS, we have to call the API, which returns the JSON output. The chart is then rendered based on the type configured for it.
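To make that concrete, here is a hedged sketch of the data-shaping part such a highstock_chart.js could contain. The function name toHighstockSeries, the row shape {date, value}, and the endpoint shown in the comment are illustrative assumptions, not the module's actual code:

```javascript
// Hypothetical helper for highstock_chart.js: converts REST rows of the
// assumed shape {date: "YYYY-MM-DD", value: N} into the
// [[timestamp-in-ms, value], ...] pairs that Highstock expects,
// sorted by timestamp.
function toHighstockSeries(rows) {
  return rows
    .map(function (row) {
      return [Date.parse(row.date), Number(row.value)];
    })
    .sort(function (a, b) {
      return a[0] - b[0];
    });
}

// In the browser, the result would feed Highcharts.stockChart, e.g.:
//
//   fetch("/api/highstock-chart?_format=json")
//     .then(function (res) { return res.json(); })
//     .then(function (rows) {
//       Highcharts.stockChart("container", {
//         series: [{ type: "line", data: toHighstockSeries(rows) }]
//       });
//     });

var sample = toHighstockSeries([
  { date: "2011-02-23", value: 167962942 },
  { date: "2011-02-18", value: 204011724 }
]);
console.log(sample);
// [ [ 1297987200000, 204011724 ], [ 1298419200000, 167962942 ] ]
```

Note that the pairs come out sorted by timestamp even though the API rows arrived out of order, matching the array format shown above.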

Step 6: Create a block plugin, HighstockChartBlock.php, to show the chart.

Place the above block in any of your desired regions and it will display a chart like the one below:

Highstock Chart

The default JS provides the following features:

  • Range selector
  • Date range
  • Scrollbar at the bottom
  • A menu icon on the right side that provides options to print the chart or download it in PNG, JPEG, SVG, and PDF format
  • Hovering the mouse shows a marker with the x-axis and y-axis values highlighted

 

Properties of Highstock Chart:

The Highstock JavaScript library provides several properties and methods to configure the chart. All of these configurations can be found in the Highstock API reference: https://api.highcharts.com/highstock/.

We can modify charts using these properties. I referred to the link above and configured my charts as described below:
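The exact property list from the original isn't reproduced in this excerpt, so here is a hedged sketch of what such a configuration object can look like. The title text and the chosen values are made up for illustration; the property names (rangeSelector, scrollbar, series, and so on) come from the Highstock API reference linked above:

```javascript
// A hypothetical Highstock configuration object; values are illustrative.
var chartOptions = {
  rangeSelector: {
    selected: 1,          // preselect the second zoom button (e.g. "3m")
    inputEnabled: true    // show the date-range input boxes
  },
  title: {
    text: "Site visits over time"   // illustrative title
  },
  scrollbar: {
    enabled: true         // navigator scrollbar at the bottom
  },
  series: [{
    type: "area",         // chart type: line, spline, area, etc.
    name: "Visits",
    data: []              // filled from the REST API output
  }]
};

// In the browser: Highcharts.stockChart("container", chartOptions);
console.log(Object.keys(chartOptions));
```

Each top-level key maps to one of the features listed earlier (range selector, scrollbar, and so on), so you can enable or disable them individually.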

Add these properties in the highstock_chart.js of your custom module. After applying all the properties, the chart will look similar to the image below.

Final Chart

 

This API is very handy when it comes to presenting complex data structures to end users in the form of colorful charts. You should definitely pitch it to clients who are still showing data in traditional tables, Excel sheets, etc. I hope you can now easily integrate a Drupal 8 REST API with Highstock. If you have any suggestions or queries, please leave a comment and I will try to answer.

Understanding async-await in Javascript

Async and await are extensions of promises, so if you are not clear about the basics of promises, please get comfortable with them before reading further. You can read my post on Understanding Promises in Javascript.

I am sure that many of you are using async and await already. But I think it deserves a little more attention. Here is a small test: if you can't spot the problem with the code below, then read on.

for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("userDetailsJSON", userDetailsJSON);
}

We will revisit the above code block later, once we have gone through the async await basics. As always, the Mozilla docs are your friend. Especially check out the definitions.

async and await

From MDN

An asynchronous function is a function which operates asynchronously via the event loop, using an implicit Promise to return its result. But the syntax and structure of your code using async functions is much more like using standard synchronous functions.

I wonder who writes these descriptions. They are so concise and well articulated. To break it down:

  1. The function operates asynchronously via event loop.
  2. It uses an implicit Promise to return the result.
  3. The syntax and structure of the code is similar to writing synchronous functions.

And MDN goes on to say

An async function can contain an await expression that pauses the execution of the async function and waits for the passed Promise's resolution, and then resumes the async function's execution and returns the resolved value. Remember, the await keyword is only valid inside async functions.

Let us jump into code to understand this better. We will reuse the three functions we used for understanding promises here as well.

A function that returns a promise which resolves or rejects after n number of seconds.

// getRandomNumber was defined in the promises post; a minimal version:
// returns a random integer between min and max, inclusive.
function getRandomNumber(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

var promiseTRRARNOSG = (promiseThatResolvesRandomlyAfterRandomNumnberOfSecondsGenerator = function() {
  return new Promise(function(resolve, reject) {
    let randomNumberOfSeconds = getRandomNumber(2, 10);
    setTimeout(function() {
      let randomiseResolving = getRandomNumber(1, 10);
      if (randomiseResolving > 5) {
        resolve({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      } else {
        reject({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      }
    }, randomNumberOfSeconds * 1000);
  });
});

Here are two more deterministic functions: one that resolves after n seconds and another that rejects after n seconds.

var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve({
        resolvedAfterNSeconds: n
      });
    }, n * 1000);
  });
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject({
        rejectedAfterNSeconds: n
      });
    }, n * 1000);
  });
});

Since all three of these functions return promises, we can also call them asynchronous functions. See, we wrote async functions even before knowing about them.

If we had to use the function promiseTRSANSG in the standard promise format, we would have written something like this.

var promise1 = promiseTRSANSG(3);
promise1.then(function(result) {
  console.log(result);
});
promise1.catch(function(reason) {
  console.log(reason);
});

There is a lot of unnecessary code here, like the anonymous functions used just for assigning the handlers. What async await does is improve on this syntax to make the code look more synchronous. The same logic in async await format looks like this:

result = await promiseTRSANSG(3);
console.log(result);

Well, that looks much more readable than the standard promise syntax. When we use await, execution of the code within the async function is blocked until the promise settles. That is why the value of the promise resolution ends up in the variable result. As you can make out from the code sample above, instead of the .then part, the result is assigned to the variable directly when you use await. You can also see that the .catch part is not present; rejections are handled using try catch error handling instead. So instead of promiseTRSANSG, let us use promiseTRRARNOSG. Since this function can either resolve or reject, we need to handle both scenarios. In the code above we wrote just two lines, to give you an easy comparison between the standard format and the async await format. The example in the next section gives you a better idea of the format and structure.

General syntax of using async await

async function testAsync() {
  for (var i = 0; i < 5; i++) {
    try {
      result1 = await promiseTRRARNOSG();
      console.log("Result 1 ", result1);
      result2 = await promiseTRRARNOSG();
      console.log("Result 2 ", result2);
    } catch (e) {
      console.log("Error", e);
    } finally {
      console.log("This is done");
    }
  }
}
testAsync();

From the code example above, you can see that instead of promise-specific error handling we are using the more generic try catch approach. That is one thing less to remember, and it also improves overall readability even after accounting for the try catch block around our code. Based on the level of error handling you need, you can add any number of catch blocks and make the error messages more specific and meaningful.

Pitfalls of using async and await

async await makes it much easier to use promises. Developers from a synchronous programming background will feel at home while using async and await. This should also alert us, as it means we are moving towards a more synchronous approach if we don't keep watch.

The whole point of javascript/nodejs is to be asynchronous by default, not as an afterthought. async await generally means you are doing things sequentially. So make a conscious decision whenever you want to use async await.

Now let us start analysing the code that I flashed at you in the beginning.

for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("userDetailsJSON", userDetailsJSON);
}

This seems like a harmless piece of code that fetches the GitHub details of three users: “nkgokul”, “BrendanEich”, and “gaearon”. Right? That is true. That is what this function does. But it also has some unintended consequences.

Before diving further into the code let us build a simple timer.

startTime = performance.now();  //Run at the beginning of the code
function executingAt() {
  return (performance.now() - startTime) / 1000;
}

Now we can use executingAt wherever we want to print the number of seconds that have elapsed since the beginning.

async function fetchUserDetailsWithStats() {
  i = 0;
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    i++;
    console.log("Starting API call " + i + " at " + executingAt());
    userDetails = await fetch("https://api.github.com/users/" + name);
    userDetailsJSON = await userDetails.json();
    console.log("Finished API call " + i + " at " + executingAt());
    console.log("userDetailsJSON", userDetailsJSON);
  }
}

Check out the output of the same.

async-await analysed

As you can see from the output, each await call starts only after the previous one has completed. We are trying to fetch the details of three different users: “nkgokul”, “BrendanEich”, and “gaearon”. It is pretty obvious that the output of one API call is in no way dependent on the output of the others.

The only dependence we have is these two lines of code.

userDetails = await fetch("https://api.github.com/users/" + name);
userDetailsJSON = await userDetails.json();

We can create the userDetailsJSON object only after getting userDetails, so it makes sense to use await here, within the scope of a single user. Let us therefore write an async function for fetching the details of a single user.

async function fetchSingleUsersDetailsWithStats(name) {
  console.log("Starting API call for " + name + " at " + executingAt());
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("Finished API call for " + name + " at " + executingAt());
  return userDetailsJSON;
}

Now that fetchSingleUsersDetailsWithStats is async, we can use it to fetch the details of the different users in parallel.

async function fetchAllUsersDetailsParallelyWithStats() {
  let singleUsersDetailsPromises = [];
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    let promise = fetchSingleUsersDetailsWithStats(name);
    console.log(
      "Created Promise for API call of " + name + " at " + executingAt()
    );
    singleUsersDetailsPromises.push(promise);
  }
  console.log("Finished adding all promises at " + executingAt());
  let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
  console.log("Got the results for all promises at " + executingAt());
  console.log(allUsersDetails);
}

When you want to run things in parallel, the thumb rule that I follow is:

Create a promise for each async call. Add all the promises to an array. Then pass the promises array to Promise.all. This in turn returns a single promise, which we can await.

When we put all of this together we get

startTime = performance.now();
async function fetchAllUsersDetailsParallelyWithStats() {
  let singleUsersDetailsPromises = [];
  for (name of ["nkgokul", "BrendanEich", "gaearon"]) {
    let promise = fetchSingleUsersDetailsWithStats(name);
    console.log(
      "Created Promise for API call of " + name + " at " + executingAt()
    );
    singleUsersDetailsPromises.push(promise);
  }
  console.log("Finished adding all promises at " + executingAt());
  let allUsersDetails = await Promise.all(singleUsersDetailsPromises);
  console.log("Got the results for all promises at " + executingAt());
  console.log(allUsersDetails);
}
async function fetchSingleUsersDetailsWithStats(name) {
  console.log("Starting API call for " + name + " at " + executingAt());
  userDetails = await fetch("https://api.github.com/users/" + name);
  userDetailsJSON = await userDetails.json();
  console.log("Finished API call for " + name + " at " + executingAt());
  return userDetailsJSON;
}
fetchAllUsersDetailsParallelyWithStats();

The output for this is

Promises run in parallel with timestamps

As you can make out from the output, promise creations are almost instantaneous, whereas API calls take some time. This is worth stressing: the time taken for promise creation and processing is trivial compared to IO operations. So while choosing a promise library, it makes more sense to choose one that is feature rich and has a better dev experience. Since we are using Promise.all, all the API calls run in parallel. Each API call takes almost 0.88 seconds, but because they are called in parallel we get the results of all the API calls in 0.89 seconds.

In most scenarios, understanding this much should serve us well, and you can skip to the Thumb Rules section. But if you want to dig deeper, read on.

Digging deeper into await

For this, let us limit ourselves to the promiseTRSANSG function. Its outcome is deterministic and will help us see the differences clearly.

Sequential Execution

startTime = performance.now();
var sequential = async function() {
  console.log(executingAt());
  const resolveAfter3seconds = await promiseTRSANSG(3);
  console.log("resolveAfter3seconds", resolveAfter3seconds);
  console.log(executingAt());
  const resolveAfter4seconds = await promiseTRSANSG(4);
  console.log("resolveAfter4seconds", resolveAfter4seconds);
  end = executingAt();
  console.log(end);
}
sequential();

Sequential Execution

Parallel Execution using Promise.all

var parallel = async function() {
  startTime = performance.now();
  promisesArray = [];
  console.log(executingAt());
  promisesArray.push(promiseTRSANSG(3));
  promisesArray.push(promiseTRSANSG(4));
  result = await Promise.all(promisesArray);
  console.log(result);
  console.log(executingAt());
}
parallel();

Parallel execution using promises

Concurrent Start of Execution

Asynchronous execution starts as soon as the promise is created. await just blocks the code within the async function until the promise is resolved. Let us create a function that will help us see this clearly.

var concurrent = async function() {
  startTime = performance.now();
  const resolveAfter3seconds = promiseTRSANSG(3);
  console.log("Promise for resolveAfter3seconds created at ", executingAt());
  const resolveAfter4seconds = promiseTRSANSG(4);
  console.log("Promise for resolveAfter4seconds created at ", executingAt());
  resolveAfter3seconds.then(function() {
    console.log("resolveAfter3seconds resolved at ", executingAt());
  });
  resolveAfter4seconds.then(function() {
    console.log("resolveAfter4seconds resolved at ", executingAt());
  });
  console.log(await resolveAfter4seconds);
  console.log("await resolveAfter4seconds executed at ", executingAt());
  console.log(await resolveAfter3seconds);
  console.log("await resolveAfter3seconds executed at ", executingAt());
};
concurrent();

Concurrent start and then await

From the previous post we know that .then is event driven: .then runs as soon as the promise is resolved. So let us use resolveAfter3seconds.then and resolveAfter4seconds.then to identify when our promises are actually resolved. From the output we can see that resolveAfter3seconds resolves after 3 seconds and resolveAfter4seconds after 4 seconds. This is as expected.

Now, to check how await affects the execution of the code, we have used:

console.log(await resolveAfter4seconds);
console.log(await resolveAfter3seconds);

As we saw from the .then output, resolveAfter3seconds resolved one second before resolveAfter4seconds. But we await resolveAfter4seconds first, followed by the await for resolveAfter3seconds.

From the output we can see that although resolveAfter3seconds was already resolved, it got printed only after the output of console.log(await resolveAfter4seconds);. This reiterates what we said earlier: await only blocks the execution of the next lines of code in the async function and doesn't affect the promise's execution.

Disclaimer

The MDN documentation mentions that Promise.all is still serial and that using .then is truly parallel. I have not been able to understand the difference and would love to hear back if anybody has wrapped their head around it.

Thumb Rules

Here is a list of thumb rules I use to keep my head sane around using async and await.

  1. async functions return a promise.
  2. async functions use an implicit Promise to return their result. Even if you don’t return a promise explicitly, the async function makes sure your code is passed through a promise.
  3. await blocks the code execution within the async function of which it (the await statement) is a part.
  4. There can be multiple await statements within a single async function.
  5. When using async await, make sure to use try catch for error handling.
  6. If your code contains blocking code, it is better to make it an async function. By doing this you are making sure that somebody else can use your function asynchronously.
  7. By making async functions out of blocking code, you enable the callers of your function to decide on the level of asynchronicity they want.
  8. Be extra careful when using await within loops and iterators. You might fall into the trap of writing sequentially executing code when it could easily have been done in parallel.
  9. await always waits on a single promise. If you want to await multiple promises (run them in parallel), create an array of promises and then pass it to the Promise.all function.
  10. Promise creation starts the execution of asynchronous functionality.
  11. await only blocks the code execution within the async function. It only makes sure the next line is executed when the promise resolves. So if an asynchronous activity has already started, await will have no effect on it.
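Rules 1 and 2 above can be verified in a couple of lines: an async function hands back a Promise even when its body returns a plain value.

```javascript
// Demonstrates thumb rules 1 and 2: an async function implicitly
// wraps its return value in a Promise.
async function fortyTwo() {
  return 42; // a plain value, not a Promise
}

var result = fortyTwo(); // calling it does NOT give 42 directly
console.log(result instanceof Promise); // true

result.then(function (value) {
  console.log(value); // 42, the implicitly wrapped value
});
```

To get at the wrapped value you must either await the call or attach a .then handler, as shown.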

Please point out if I am missing something here or if something can be improved.

Originally published on https://hackernoon.com/understanding-async-await-in-javascript-1d81bb07…

6 Best Practices To Safeguard Your Drupal 8 Website

The last few months have been quite challenging for media & publishing enterprises, dealing with the EU’s new data privacy law (GDPR) and Drupal’s highly critical vulnerability, DrupalGeddon 2.

On 28 March, Drupal announced the alerts about DrupalGeddon 2 (SA-CORE-2018-002 / CVE-2018-7600), which was later patched by the security team. The vulnerability was potent enough to affect the vast majority of Drupal 6, 7, and 8 websites.

Earlier, in October 2014, Drupal faced a similar vulnerability, tagged DrupalGeddon. At that time, the security patch was released within seven hours of the critical security update.

So the question is: how vulnerable is Drupal?

Just like any other major framework out there, Drupal faces security dangers as well. However, Drupal is a more secure platform when compared to its peers. Learn more about “safety concerns in an e-commerce site and how Drupal is addressing them”.

In short, we can’t specify exactly how vulnerable Drupal is, as it entirely depends on the context. Possibly you will find the answer to this question in one of our previous posts, where we talked about “Drupal Security Advisor Data”.

Implement these measures to secure your Drupal website

1. Upgrade to the latest version of Drupal

Whether it is your operating system, your antivirus, or Drupal itself, running the latest version is always suggested. And this is the least you can, and should, do to protect your website.

Updates not only bring new features but also enhance security. Further, you should keep your modules updated, as they are most often the cause of misery. It's always recommended to check the update report and update at regular intervals. The latest version is Drupal 8.3.1.

Note that hackers usually target older versions of a CMS, as they are more vulnerable.

2. Remove unnecessary modules

Agreed, modules play a critical role in enhancing the user experience. However, you should be wary of what you download, as each module widens the attack surface. Also, ensure that a module has a sizable number of downloads.

That way, even if some vulnerability does occur, it will be resolved quickly by the community, as it can affect a major chunk of companies and individuals. Furthermore, you can disable unused modules or uninstall them completely.

3. Practice strong user management

In a typical organization, several individuals require access to the website to manage different areas within it. These users can be the source of a security breach, so it is important to keep control of their permissions.

Give limited access to the site instead of giving access to the whole site by default. And when a user leaves the organization, they should be promptly removed from the administrator list to eliminate any unnecessary risk. Read on for a quick review of “managing user roles & permission in Drupal 8”.

4. Choose a proper hosting provider

It's always a dilemma to figure out which hosting provider we should trust for our website. Needless to say, the hosting provider plays a key role in ensuring the security of the website. Look for one that offers a security-first Drupal hosting solution with all the server-side security measures, like SSL.

5. Enable HTTPS

As a core member of the development team, a business owner, or a decision maker, it's your responsibility to take ownership of the security of your enterprise website.

Consider performing checks for common vulnerabilities at regular intervals, as this will allow you to make quick work of those holes by following the prompts. Here is what Drupal experts have to say about "securing users' private data from unauthorized access".

6. Backup regularly

Plan for the worst. Keep your codebase and database handy. There are a number of reasons, both accidental and intentional, that can destroy your hard work. Here is a list of reasons why you should regularly back up your website.

  • General safety
  • The original version of your site has aged
  • Respond quickly if your site is hacked
  • Updates went wrong 

To sum up, follow the steps above to secure your Drupal website. Also, reporting a security breach to the Drupal community can be an effective way to patch the issue and seek help from the community to avoid massive risk.

Now go ahead and secure your Drupal website!
 

Killing twitter with cryptocurrency

Well, the title was hyperbole. Now that I have your attention, let us get started. It might be a stretch as of today to say we can kill Twitter. But in this post I would like to show that it may not be impossible after all, at least in a couple of years.

A few things to know before we start killing twitter.

It starts with realising that you are doing Twitter a favour, not the other way around. Yes, I agree that Twitter has been a great tool and even played a role in the Arab Spring. Check out Social Media Made the Arab Spring, But Couldn't Save It for further details.

But we need to realise that while these are pleasant side-effects of Twitter and social media, for a service or business to be sustainable it has to be profitable, or at least have profit-generating potential in the future. Whether the service follows an ad-revenue model or a freemium model, one thing is common: either you pay for the service, or the service needs to sell something to somebody.

Understanding what that something is, and to whom it is sold, is important.

Let us start with the most quoted line regarding free, or seemingly free, services.

You are the product

Most social media users forget the value they add to the networks. It is easy for us to see a blog post or a video as data/content. But we fail to realise that even the short status updates we post on social media websites, and our comments on them, are also content.

Every action we take on social media is valuable and adds to the valuation of the platform. How much that action is valued, and how it is valued, requires a detailed analysis (I will be following up this post with a couple of related posts on the topic). But for now let us understand this much.

Every action we take on a social media website falls into one of the following categories:

  1. Content creation
  2. Content curation
  3. Content Distribution
  4. Training the AI models.

I have tried to highlight the same in this tweet of mine.

social media misconception

It is difficult for people to understand this, as they cannot see it clearly, or rather there is no easy way for them to see it. It only becomes clear in conversations like the following. In February this year, when aantonop was complaining about how Facebook was locking him out, one user mentioned this.

is-social-media-doing-us-a-favor

Aantonop’s reply was interesting.

you are doing social media a favor

So it brings us to the question: who is benefitting from whom? Is the platform benefitting from the user, or is the user benefitting from the platform? At best, it is a synergy between the platform and the user. At worst, the platform is ripping off your data and making a hell of a lot of money while not rewarding you in any way.

What is your data worth?

Data, and the value it creates, has different lifetimes, and there is a lot of overlap, so it is difficult to put a value on it. Let us use a very crude way to identify the average minimum value of our data on Facebook. Facebook is valued at 600 billion USD today, and there are around 2 billion users on Facebook. Since Facebook makes money primarily by showing ads or/and selling your data :P, the data created by each user should be worth at least 300 USD.
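That back-of-the-envelope division, using the rough figures above, is simply:

```javascript
// Crude per-user valuation: market cap divided by user count.
// Both figures are the rough estimates quoted above, not exact numbers.
var facebookValuationUSD = 600e9; // ~600 billion USD
var facebookUsers = 2e9;          // ~2 billion users

var valuePerUserUSD = facebookValuationUSD / facebookUsers;
console.log(valuePerUserUSD); // 300
```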

How Much Is >Your< Data Worth? At Least $240 per Year. Likely Much More.
This is the first article in a series of posts that addresses the value of personal data.medium.com

One thing that everybody seems to agree on is that data is the new oil and it is valuable. But what most of us fail to understand is that oil has a single lifecycle, whereas data has multiple life-cycles. So any valuation you put on a piece of data is only a moving value affected by various parameters. We also need to realise that data we consider archived or stale also has revenue-generating potential in the future. AI models will need a lot of data going forward and will unlock the revenue-generating potential of your data. In the following article you can check out how Bottos and DataBroker DAO are unlocking the potential of data from various sources.

Data Monetization: 2 untapped ways to monetize your data, no matter what size it is
Data is everything and everything is data. In this data-driven world, the absence of data would lead to a collapse of…www.bitfolio.org

The two ways to realise the true value of your data

There are two ways you will realise that your data is worth something.

One: have somebody like Zuck sell your data and make billions in the process.

Two: look at the real money people make with your data.

1. When your data is sold

The Cambridge Analytica exposé happened on March 17, 2018. It made it clear that user targeting is not just for ads and can be used for much more. There were serious concerns about users’ privacy, and the exposé once again proved that privacy is dead. What is more disturbing is that experts suggested this might have a serious effect on Facebook’s future valuation. But that turned out to be completely false. Can you spot the dip in Facebook’s market cap caused by this scandal? I have highlighted it with a red circle towards the right end of the graph. This is what I would call “a major dip in the short term but a minor blip in the long term”. The quick correction back to the trend line only suggests that nobody takes privacy seriously any more.

Facebook Marketcap

2. When you look at real money people make with your data

I am sure that Andreas M. Antonopoulos knows the value of data. I am just taking this example because it was a high-profile case where content created elsewhere was able to generate revenue on another platform because of data distribution. The interesting thing is that in this case the money made was used for translating aantonop’s videos into other languages. You can read more about it here.

The real aanntonop

Aantonop made the above post, which can be called a “proof of identity” post, verifying that he is the real aantonop. The post gathered a lot of attention and has rewards of 1449 USD. I just hope that aantonop claims the amount one day and starts using Steem more frequently.

I took aantonop’s example because he is very popular in the world of Bitcoin and his videos have helped many entrepreneurs take the plunge into Bitcoin. His videos are proof that well-made content has a long shelf life and has revenue-generating potential even outside the platform it was created on.

Now let us get back to our original question.

How to kill twitter?

This might seem like an impossible proposition to many. Let us look at the reasons why it is difficult to kill Twitter, or Facebook for that matter.

I don’t need another social network.

I first got to know about Robert Scoble in the Google+ days. I invited him to check out the Steemit platform and he replied with “I don’t need another social network.” Today we are in an age of social media overload. A new social media platform needs to cross a critical mass before everyone else follows. Replacing Facebook might be impossible for the next few years, but we might have a chance to replace Twitter with a decentralised version. Facebook has too much of a lead: it has your photos, videos, friends, memories, groups, and pages, and any new entrant needs to address all of these to overcome Facebook. Whereas with Twitter, a limited feature set with additional benefits should be able to swing the needle in the new entrant’s favour.

So for now let us assume that, given enough motivation, users might consider shifting to a new platform.

Twitter has first mover advantage

Twitter is huge. Twitter has the first-mover advantage. Yes, that might be the case. But the last year has proven that with the right incentive models you can have a jumpstart. Binance became the fastest unicorn in history.

From Zero To Crypto Billionaire In Under A Year: Meet The Founder Of Binance
Changpeng " CZ" Zhao CEO, Binance Crypto Net Worth: $1.1 billion-$2 billion* Seven months ago Binance didn't exist…www.forbes.com

Also Binance was more profitable than Germany’s biggest bank.

Crypto Exchange Binance is More Profitable than Germany's Biggest Bank
This year, in the first quarter of 2018, Deutsche Bank, Germany's biggest bank and one of Europe's leading financial…www.ccn.com

So don't be surprised if a new entrant replaces Twitter in less than a year.

Show me the money

Attributing value to content is a tough task. There have been many unsuccessful attempts in the past. I think the Steem blockchain has come further than any other attempt. By incentivising both content creation and content curation, Steem has figured out a subjective way to attribute value to content. With the release of SMTs later this year, the community will only get better at arriving at closer estimates of the value of posts. When people were told that their content was worth something, they were not able to relate to it. With platforms like Steem having put a definitive value on content and having paid it to the content creators (many of whom have encashed it to fiat), the idea is more palpable now. Monetary incentives can do wonders, and as more people get to know about these platforms the effect will only get compounded.

Hitting the critical mass

To be a serious contender to Twitter, the new platform needs to hit critical mass. This can be the real challenge. So here are the things that can be done.

  1. Create a distributed cryptocurrency along the lines of Steem (especially the rewards mechanism). Keep the interface, UX and restrictions (like the number of characters) very similar to Twitter, so that people feel at home ;)
  2. In addition to normal account creation, have a reserved namespace twitter-[twitter-handle]. This will be reserved for creating a one-to-one mapping of user accounts from Twitter to the new blockchain.
  3. The user accounts for each user on Twitter are also created on the new platform. Both usernames and passwords (private keys) will be created. Twitter users can claim their password by sending a tweet to the Twitter handle of the new blockchain. The password or private keys will be DM'ed to users.
  4. Since all tweets are public, duplicate them on the new platform under the users' accounts. If that is a stretch, it can start with the latest tweets of popular accounts and then be expanded slowly.
  5. The beta users will have access to popular content on the new platform. Their retweets and likes will decide the value of the new tweets mirrored from Twitter.
  6. While users might be hesitant to create new accounts, I think there will be very few people who will not be happy to claim their accounts, especially when they know that there are rewards waiting for them to encash for the content they have created.
  7. The incentives or rewards to be received on the new platform will be bigger for users with huge numbers of followers (assuming that their content is also liked by the beta users on the new platform). So if these influencers move to the new platform, they will also bring along at least some part of their following.
  8. Considering that the content on the blockchain will be censorship resistant and that it rewards good content, the platform should be able to hit critical mass very soon.

I am not sure what legal issues would surround an attempt like this. But I think this is something definitely worth trying. A few crypto-millionaires coming together should have enough funds to try something like this. What do you think? Will an attempt like this work? Share your thoughts.

Understanding promises in JavaScript

 
I am making you a pinky promise that by the end of this post you will know JavaScript Promises better.

I have had a kind of "love and hate" relationship with JavaScript. Having worked on Java and PHP for the last 10 years, JavaScript seemed very different, but it was always intriguing to me. I did not get to spend enough time on it and have been trying to make up for that of late.

Promises were the first interesting topic that I came across. Time and again I have heard people say that Promises save you from callback hell. While that might be a pleasant side effect, there is more to Promises, and here is what I have been able to figure out so far.

Background

When you start working with JavaScript for the first time it can be a little frustrating. You will hear some people say that JavaScript is a synchronous programming language, while others claim that it is asynchronous. You hear about blocking code, non-blocking code, event-driven design patterns, event life cycles, function stacks, event queues, bubbling, polyfills, Babel, Angular, ReactJS, Vue.js and a ton of other tools and libraries. Fret not. You are not the first. There is a term for that as well: JavaScript fatigue. You should check out the following article. There is a reason this post got 42k claps on Hackernoon :)

How it feels to learn JavaScript in 2016
No JavaScript frameworks were created during the writing of this article.hackernoon.com

JavaScript is a synchronous programming language. But thanks to callback functions, we can make it function like an asynchronous one.
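As a tiny sketch of that idea (the function name fetchGreeting and the 100 ms delay are made up for illustration), a callback handed to setTimeout runs later, while the rest of the script carries on:

```javascript
// fetchGreeting is a made-up stand-in for any async operation (an API
// call, a file read, etc.). setTimeout hands the work to the environment
// and invokes our callback later, so the script itself is never blocked.
var received = null;

function fetchGreeting(callback) {
  setTimeout(function() {
    callback("hello from the future");
  }, 100);
}

console.log("before the call");
fetchGreeting(function(greeting) {
  received = greeting;
  console.log("callback fired with:", greeting);
});
console.log("after the call");
// "after the call" prints before the callback fires: the callback made
// our synchronous language behave asynchronously.
```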

Promises for layman

Promises in JavaScript are very similar to promises in real life. So let us look at promises in real life first. The definition of a promise from the dictionary is as follows:

promise : noun : Assurance that one will do something or that a particular thing will happen.

So what happens when somebody makes you a promise?

  1. A promise gives you an assurance that something will be done. Whether they (the ones who made the promise) will do it themselves or get it done by others is immaterial. They give you an assurance based on which you can plan something.
  2. A promise can either be kept or broken.
  3. When a promise is kept you expect something out of that promise which you can make use of for your further actions or plans.
  4. When a promise is broken, you would like to know why the person who made the promise was not able to keep up his side of the bargain. Once you know the reason and have a confirmation that the promise has been broken you can plan what to do next or how to handle it.
  5. At the time of making a promise all we have is only an assurance. We will not be able to act on it immediately. We can decide and formulate what needs to be done when the promise is kept (and hence we have expected outcome) or when the promise is broken (we know the reason and hence we can plan a contingency).
  6. There is a chance that you may not hear back from the person who made the promise at all. In such cases you would prefer to keep a time threshold. Say, if the person who made the promise doesn't come back in 10 days, you will consider that he had some issues and will not keep his promise. So even if he comes back after 15 days, it doesn't matter any more, as you have already made alternate plans.

Promises in JavaScript

As a rule of thumb, for JavaScript I always read the documentation on MDN Web Docs. Of all the resources, I think they provide the most concise details. I read the Promises page from MDN Web Docs and played around with code to get a hang of it.

There are two parts to understanding promises: the creation of promises and the handling of promises. Though most of our code will generally cater to handling promises created by other libraries, a complete understanding will help us for sure, and an understanding of promise creation is equally important once you cross the beginner stage.

Creation of Promises

Let us look at the signature for creating a new promise.

new Promise( /* executor */ function(resolve, reject) { ... } );

The constructor accepts a function called the executor. This executor function accepts two parameters, resolve and reject, which are in turn functions. Promises are generally used for easier handling of asynchronous operations or blocking code, examples being file operations, API calls, DB calls, IO calls etc. These asynchronous operations are initiated within the executor function. If an asynchronous operation is successful, the expected result is returned by calling the resolve function by the creator of the promise. Similarly, if there was some unexpected error, the reason is passed on by calling the reject function.

Now that we know how to create a promise, let us create a simple one for understanding's sake.

var keepsHisWord;
keepsHisWord = true;
promise1 = new Promise(function(resolve, reject) {
  if (keepsHisWord) {
    resolve("The man likes to keep his word");
  } else {
    reject("The man doesnt want to keep his word");
  }
});
console.log(promise1);
Every promise has a state and value

Since this promise gets resolved right away, we will not be able to inspect its initial state. So let us create a new promise that will take some time to resolve. The easiest way to do that is to use the setTimeout function.

promise2 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve({
      message: "The man likes to keep his word",
      code: "aManKeepsHisWord"
    });
  }, 10 * 1000);
});
console.log(promise2);

The above code just creates a promise that resolves unconditionally after 10 seconds. So we can check out the state of the promise until it is resolved.

state of promise until it is resolved or rejected

Once the ten seconds are over, the promise is resolved and both PromiseStatus and PromiseValue are updated accordingly. As you can see, we updated the resolve function so that we pass a JSON object instead of a simple string. This is just to show that we can pass other values as well in the resolve function.

A promise that resolves after 10 seconds with a JSON object as returned value

Now let us look at a promise that will reject. Let us just modify promise1 a little for this.

keepsHisWord = false;
promise3 = new Promise(function(resolve, reject) {
  if (keepsHisWord) {
    resolve("The man likes to keep his word");
  } else {
    reject("The man doesn't want to keep his word");
  }
});
console.log(promise3);

Since this will create an unhandled rejection, the Chrome browser will show an error. You can ignore it for now; we will get back to that later.

rejections in promises

As we can see, PromiseStatus can have three different values: pending, resolved or rejected. When a promise is created, PromiseStatus will be in the pending state and PromiseValue will be undefined until the promise is either resolved or rejected. When a promise is in the resolved or rejected state, it is said to be settled. So a promise generally transitions from the pending state to the settled state.

Now that we know how promises are created we can look at how we can use or handle promises. This will go hand in hand with understanding the Promise object.

Understanding promises Object

As per MDN documentation

The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.

The Promise object has static methods and prototype methods. Static methods can be applied independently, whereas the prototype methods need to be applied on instances of the Promise object. Remembering that both the static and prototype methods all return a Promise makes it much easier to make sense of things.

Prototype Methods

Let us first start with the prototype methods. There are three of them. Just to reiterate, remember that all these methods can be applied on an instance of the Promise object and all of them return a promise in turn. All the following methods assign handlers for different state transitions of a promise. As we saw earlier, when a Promise is created it is in the pending state. One or more of the following three methods will be run when a promise is settled, based on whether it is fulfilled or rejected.

Promise.prototype.catch(onRejected)

Promise.prototype.then(onFulfilled, onRejected)

Promise.prototype.finally(onFinally)

The below image shows the flow for the .then and .catch methods. Since they return a Promise, they can be chained again, which is also shown in the image. If .finally is declared for a promise, it will be executed whenever the promise is settled, irrespective of whether it is fulfilled or rejected.

From : https://mdn.mozillademos.org/files/15911/promises.png
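Since the diagram highlights chaining, here is a small standalone sketch (my own example, not from the MDN page) of how a value flows through a .then chain:

```javascript
// Each .then returns a new promise, so handlers can be chained.
// A plain return value is wrapped in a resolved promise; returning a
// promise makes the chain wait for it.
var chained = Promise.resolve(2)
  .then(function(value) {
    return value * 2;                    // 2 -> 4, wrapped automatically
  })
  .then(function(value) {
    return Promise.resolve(value + 1);   // 4 -> 5, via an explicit promise
  })
  .catch(function(reason) {
    return -1;                           // not reached here; .catch can recover a chain
  });

chained.then(function(value) {
  console.log("end of the chain:", value); // end of the chain: 5
});
```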

Here is a small story. You are a school going kid and you ask your mom for a phone. She says “I will buy a phone for this month end.”

Let us look at how it will look in JavaScript if the promise gets executed at the end of the month.

var momsPromise = new Promise(function(resolve, reject) {
  momsSavings = 20000;
  priceOfPhone = 60000;
  if (momsSavings > priceOfPhone) {
    resolve({
      brand: "iphone",
      model: "6s"
    });
  } else {
    reject("We do not have enough savings. Let us save some more money.");
  }
});
momsPromise.then(function(value) {
  console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
});
momsPromise.catch(function(reason) {
  console.log("Mom couldn't buy me the phone because ", reason);
});
momsPromise.finally(function() {
  console.log(
    "Irrespective of whether my mom can buy me a phone or not, I still love her"
  );
});

The output for this will be.

moms failed promise.

If we change the value of momsSavings to 200000, mom will be able to gift her son the phone. In that case the output will be

mom keeps her promise.

Let us wear the hat of somebody who consumes this library. We are mocking the output and behaviour so that we can look at how to use then and catch effectively.

Since .then can assign both the onFulfilled and onRejected handlers, instead of writing separate .then and .catch we could have done the same with a single .then. It would have looked like below.

momsPromise.then(
  function(value) {
    console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
  },
  function(reason) {
    console.log("Mom couldn't buy me the phone because ", reason);
  }
);

But for readability of the code I think it is better to keep them separate.

To make sure that we can run all these samples in any browser (or Chrome in particular), I am making sure that we do not have external dependencies in our code samples. To better understand the further topics, let us create a function that returns a promise which will be resolved or rejected randomly, so that we can test out various scenarios. To understand the concept of asynchronous functions, let us introduce a random delay into our function as well. Since we will need random numbers, let us first create a random function that returns a random number between x and y.

function getRandomNumber(start = 1, end = 10) {
  //works when both start,end are >=1 and end > start
  return parseInt(Math.random() * end) % (end-start+1) + start;
}

Let us create a function that will return a promise for us. Let us call our function promiseTRRARNOSG, which is an alias for promiseThatResolvesRandomlyAfterRandomNumnberOfSecondsGenerator. This function will create a promise which will resolve or reject after a random number of seconds between 2 and 10. To randomise rejecting and resolving, we will create a random number between 1 and 10. If the random number generated is greater than 5 we will resolve the promise, else we will reject it.

function getRandomNumber(start = 1, end = 10) {
  //works when both start and end are >=1
  return (parseInt(Math.random() * end) % (end - start + 1)) + start;
}
var promiseTRRARNOSG = (promiseThatResolvesRandomlyAfterRandomNumnberOfSecondsGenerator = function() {
  return new Promise(function(resolve, reject) {
    let randomNumberOfSeconds = getRandomNumber(2, 10);
    setTimeout(function() {
      let randomiseResolving = getRandomNumber(1, 10);
      if (randomiseResolving > 5) {
        resolve({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      } else {
        reject({
          randomNumberOfSeconds: randomNumberOfSeconds,
          randomiseResolving: randomiseResolving
        });
      }
    }, randomNumberOfSeconds * 1000);
  });
});
var testPromise = promiseTRRARNOSG();
testPromise.then(function(value) {
  console.log("Value when promise is resolved : ", value);
});
testPromise.catch(function(reason) {
  console.log("Reason when promise is rejected : ", reason);
});
// Let us loop through and create ten different promises using the function to see some variation. Some will be resolved and some will be rejected. 
for (i=1; i<=10; i++) {
  let promise = promiseTRRARNOSG();
  promise.then(function(value) {
    console.log("Value when promise is resolved : ", value);
  });
  promise.catch(function(reason) {
    console.log("Reason when promise is rejected : ", reason);
  });
}

Refresh the browser page and run the code in console to see the different outputs for resolve and reject scenarios. Going forward we will see how we can create multiple promises and check their outputs without having to do this.

Static Methods

There are four static methods in Promise object.

The first two are helper methods or shortcuts. They help you create resolved or rejected promises easily.

Promise.reject(reason)

Helps you create a rejected promise.

var promise3 = Promise.reject("Not interested");
promise3.then(function(value){
  console.log("This will not run as it is a rejected promise. The resolved value is ", value);
});
promise3.catch(function(reason){
  console.log("This will run as it is a rejected promise. The reason is ", reason);
});

Promise.resolve(value)

Helps you create a resolved promise.

var promise4 = Promise.resolve(1);
promise4.then(function(value){
  console.log("This will run as it is a resolved promise. The resolved value is ", value);
});
promise4.catch(function(reason){
  console.log("This will not run as it is a resolved promise", reason);
});

On a side note, a promise can have multiple handlers. So you can update the above code to

var promise4 = Promise.resolve(1);
promise4.then(function(value){
  console.log("This will run as it is a resolved promise. The resolved value is ", value);
});
promise4.then(function(value){
  console.log("This will also run as multiple handlers can be added. Printing twice the resolved value which is ", value * 2);
});
promise4.catch(function(reason){
  console.log("This will not run as it is a resolved promise", reason);
});

And the output will look like.

The next two methods help you process a set of promises. When you are dealing with multiple promises, it is better to create an array of promises first and then do the necessary action over the set. For understanding these methods we will not be able to use our handy promiseTRRARNOSG, as it is too random. It is better to have some deterministic promises so that we can understand the behaviour. Let us create two functions: one that resolves after n seconds and one that rejects after n seconds.

var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      resolve({
        resolvedAfterNSeconds: n
      });
    }, n * 1000);
  });
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
  n = 0
) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject({
        rejectedAfterNSeconds: n
      });
    }, n * 1000);
  });
});

Now let us use these helper functions to understand Promise.all.

Promise.all

As per MDN documentation

The Promise.all(iterable) method returns a single Promise that resolves when all of the promises in the iterable argument have resolved or when the iterable argument contains no promises. It rejects with the reason of the first promise that rejects.

Case 1 : When all the promises are resolved. This is the most frequently used scenario.

console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(2));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
  console.timeEnd("Promise.All");
  console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
  console.log("One of the promises failed with the following reason", reason);
});
All promises resolved.

There are two important observations we need to make in general from the output.

First : The third promise, which takes 2 seconds, finishes before the second promise, which takes 4 seconds. But as you can see in the output, the order of the promises is maintained in the values.

Second : I added a console timer to find out how long Promise.all takes. If the promises were executed sequentially, it should have taken 1+4+2=7 seconds in total. But from our timer we saw that it only takes 4 seconds, the duration of the longest promise. This is proof that all the promises were executed in parallel.

Case 2 : When the array contains no promises. I think this is the least frequently used scenario.

console.time("Promise.All");
var promisesArray = [];
promisesArray.push(1);
promisesArray.push(4);
promisesArray.push(2);
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
  console.timeEnd("Promise.All");
  console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
  console.log("One of the promises failed with the following reason", reason);
});

Since there are no promises in the array, the returned promise is resolved right away with the plain values themselves.

Case 3 : It rejects with the reason of the first promise that rejects.

console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRJANSG(2));
promisesArray.push(promiseTRSANSG(4));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
  console.timeEnd("Promise.All");
  console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
  console.timeEnd("Promise.All");
  console.log("One of the promises failed with the following reason ", reason);
});
Execution stopped after the first rejection

Promise.race

As per MDN documentation

The Promise.race(iterable) method returns a promise that resolves or rejects as soon as one of the promises in the iterable resolves or rejects, with the value or reason from that promise.

Case 1 : One of the promises resolves first.

console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRSANSG(2));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
  console.timeEnd("Promise.race");
  console.log("The fastest promise resolved", values);
});
promisesRace.catch(function(reason) {
  console.timeEnd("Promise.race");
  console.log("The fastest promise rejected with the following reason ", reason);
});
fastest resolution

All the promises are run in parallel. The third promise resolves in 2 seconds. As soon as this is done the promise returned by Promise.race is resolved.

Case 2: One of the promises rejects first.

console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(6));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
  console.timeEnd("Promise.race");
  console.log("The fastest promise resolved", values);
});
promisesRace.catch(function(reason) {
  console.timeEnd("Promise.race");
  console.log("The fastest promise rejected with the following reason ", reason);
});
fastest rejection

All the promises are run in parallel. The fourth promise rejected in 3 seconds. As soon as this is done the promise returned by Promise.race is rejected.
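The classic practical use of Promise.race is adding a timeout to a promise. Here is a sketch of the pattern; withTimeout is our own helper name for illustration, not a built-in:

```javascript
// withTimeout is a hypothetical helper (not a built-in): it races the
// given promise against a timer, so whichever settles first wins.
function withTimeout(promise, ms) {
  var timer = new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject("Timed out after " + ms + " ms");
    }, ms);
  });
  return Promise.race([promise, timer]);
}

// A slow operation that takes 500 ms loses against a 100 ms timeout.
var slowOperation = new Promise(function(resolve) {
  setTimeout(function() {
    resolve("finally done");
  }, 500);
});

withTimeout(slowOperation, 100).catch(function(reason) {
  console.log(reason); // Timed out after 100 ms
});
```

This is essentially what Bluebird's Promise.prototype.timeout, listed further below, gives you out of the box.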

I have written all the example methods so that I can test various scenarios, and the tests can be run in the browser itself. That is the reason you don't see any API calls, file operations or database calls in the examples. While all of these are real-life examples, you need additional effort to set them up and test them, whereas using the delay functions gives you similar scenarios without the burden of additional setup. You can easily play around with the values to check out different scenarios. You can use a combination of the promiseTRJANSG, promiseTRSANSG and promiseTRRARNOSG methods to simulate enough scenarios for a thorough understanding of promises. Also, the use of console.time before and after relevant blocks will help us easily identify whether the promises ran in parallel or sequentially. Let me know if you have any other interesting scenarios or if I have missed something. If you want all the code samples in a single place, check out this gist.

 

Bluebird has some interesting features like

  1. Promise.prototype.timeout
  2. Promise.some
  3. Promise.promisify

We will discuss these in a separate post.

I will also be writing one more post about my learnings from async and await.

Before closing I would like to list down all the thumb rules I follow to keep my head sane around promises.

  1. Use promises whenever you are using async or blocking code.
  2. resolve maps to then and reject maps to catch for all practical purposes.
  3. Make sure to write both .catch and .then methods for all the promises.
  4. If something needs to be done in both the cases use .finally
  5. We only get one shot at settling each promise; once resolved or rejected it cannot change again.
  6. We can add multiple handlers to a single promise.
  7. The return type of all the methods in the Promise object, whether they are static methods or prototype methods, is again a Promise.
  8. In Promise.all, the order of the promises is maintained in the values variable, irrespective of which promise resolved first.

Originally published on https://hackernoon.com/understanding-promises-in-javascript-13d99df067c1

Sample data from Bitcoin Dominance Chart on Coin Market Cap.

CoinMarketCap has some global charts which help you get insights into the overall cryptocurrency markets. You can find them at https://coinmarketcap.com/charts/. I was particularly interested in the dominance chart, as I was trying to analyze how Bitcoin and altcoin dominance affects markets and what role it played on important dates in the last year.

Recently, when I was trying to sample from the above graph for an article about "Bitcoin Dominance and the Rise of Others" - https://medium.com/@gokulnk/bitcoin-dominance-and-the-emergence-of-others-64a7996272ad - it was taking a lot of time to get the data and it was really irritating. I had to mouse over the graph and copy the data manually into the Medium article I was writing. I was using Mac split-screen for this, and it was not easy to switch focus between the split screens, which only added to the frustration. Comment if you know an easier way to do it.

So I set out to write a small script to fetch the data. Though the script took a little longer than I expected, I think it will save me a lot of time going forward whenever I want to do sampling. I am putting the script out so that others can use it too.


// The dominance graph is the third Highcharts instance on the page
dominanceChart = Highcharts.charts[2];
// Coins and dates to sample; edit these to get the data you need
coinsForDominance = ['Bitcoin','Ethereum','Bitcoin Cash', 'Ripple', 'Others'];
datesForDominance = ['Jan 25 2017', 'March 10 2017', 'March 26 2017', 'May 18 2017', 'June 14 2017', 'August 03 2017', 'November 12 2017', 'Dec 8 2017', 'Jan 14 2018', 'May 3 2018'];
coinDateDominanceMatrix = [];
// Keep only the series of the coins we are interested in
coinsForDominanceData = dominanceChart.series.filter(coin => {return coinsForDominance.indexOf(coin.name) != -1});
csvString = "Date, " + coinsForDominance.join();
// The chart's data points are not at midnight; work out the offset from
// midnight using the first data point (1485242220000 is its timestamp)
firstDateTime = coinsForDominanceData[0].xData[0];
date = new Date(1485242220000);
date.setHours(0);
date.setMinutes(0);
date.setSeconds(0);
date.setMilliseconds(0);
difference = firstDateTime - date.getTime();

// For every date, look up each coin's dominance value and build both a
// matrix (for console.table) and a CSV string
datesForDominance.forEach(function(relevantDate){
    csvString+= "\n" + relevantDate + ", ";
    coinDateDominanceMatrix[relevantDate] = [];
    coinsForDominanceData.forEach(function(coin){
        timestring = Date.parse(relevantDate)+difference;
        coinDominance = coin.yData[coin.xData.indexOf(timestring)];
        var dominanceValue = Math.round(coinDominance * 100) / 100;
        coinDateDominanceMatrix[relevantDate][coin.name]= dominanceValue;
        csvString+=dominanceValue + ", ";
    });
});
console.table(coinDateDominanceMatrix);
console.log(csvString);

 

Just visit the page https://coinmarketcap.com/charts/ and copy-paste the above code into the console to get the relevant data. You can also edit the coinsForDominance and datesForDominance variables to get the data that you need.

Let me know if it helped you.

Originally published on https://hackernoon.com/sample-data-from-bitcoin-dominance-chart-on-coin…

Integrating Headless Drupal with AngularJS

This post is the last part of our AngularJS series, where we have discussed all of the essential concepts and knowledge you need to get started. The series covers a wide range of topics, including an intro to AngularJS, data binding methods, modules & controllers, filters, custom directives and routing.

As a Drupal developer, you must have heard the phrase "Headless Drupal" and wondered what exactly it is, and how it is different from standard Drupal. No worries! We will take a brief look at the various facets of Headless Drupal and how to implement a REST API in Drupal, both programmatically and through the Views method. We will also explore how to integrate Drupal with AngularJS. Let's try to understand.

In short, Headless Drupal is nothing but a front-end framework decoupled from the backend that stores the data. Here, the front-end is responsible for what to display and requests data from Drupal as needed. Users interact with the front-end framework rather than the backend CMS. Further, instead of rendering HTML, Drupal provides the data in JSON format to a front-end framework like AngularJS, Ember.js or React.

Cutting a long story short, how does the headless web work?

First, let's see the flow of headless Drupal and how to integrate a front-end framework.

Headless Drupal flow and its integration
  • Static web: a static HTML page directly interacts with the browser, not with a backend framework.
  • CMS web: here, DB content and PHP logic interact with the browser.
  • Headless web: a front-end framework sits between the PHP logic and the browser. Here we use an API to fetch the data from the CMS and write the logic for what is shown in the browser.

Implementing Rest API in Drupal

In order to display data in a front-end framework, we need to create a REST API plugin that will help fetch the data from Drupal.

Notably, there are two ways to create a REST API plugin in Drupal 8:

  • Programmatically
  • Views 

Method 1:  Programmatically

Step 1. Create custom module using Drupal console 

             Command: drupal generate:module

Step 2. Now generate a REST API resource plugin with the help of Drupal Console

            Command: drupal generate:plugin:rest:resource

After creating the REST resource programmatically, you will see a folder structure similar to the one below.

Rest API folder structure
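To give an idea of what the generated plugin contains, here is a minimal sketch of the resource class (the module name vbrest, class name, and the /vbrest path are assumptions matching the URL used later in this post; Drupal Console generates a similar skeleton for you):

```php
<?php

namespace Drupal\vbrest\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * Provides a demo REST resource.
 *
 * @RestResource(
 *   id = "vbrest_resource",
 *   label = @Translation("VB REST resource"),
 *   uri_paths = {
 *     "canonical" = "/vbrest"
 *   }
 * )
 */
class VbrestResource extends ResourceBase {

  /**
   * Responds to GET requests with a simple JSON payload.
   */
  public function get() {
    $response = ['message' => 'Hello from the custom REST resource'];
    return new ResourceResponse($response);
  }

}
```

The annotation's uri_paths value is what defines the URL the resource is served from.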


          

Step 3. Go to admin/config/services/rest.

Step 4. Enable the REST resource we created and edit its configuration: the allowed methods (GET, POST), the formats (json, xml), and the authentication (e.g., basic_auth).

Step 5. We can now access the API URL.

          URL format: /vbrest?_format=json

Note: Make sure to append the query parameter ?_format=json.

Now use a tool like Postman to test whether the data is rendered.
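From the command line, a quick check might look like this (the localhost/d8 docroot and the /vbrest path are assumptions matching the URL format above):

```shell
# Request the resource as JSON; omitting ?_format=json would return HTML.
curl "http://localhost/d8/vbrest?_format=json"
```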

Method 2: Using views

Step 1. Move to path: admin/structure/views

Step 2. Create a new view, enable the checkbox to export the view as a REST API, and specify the URL.

Rest API export setting


Step 3. After creating the view, configure the available formats (json, hal_json, xml, etc.) and the fields that should be exposed in the API.

View format


Step 4. The view is created successfully. Access the API by its URL using Postman to check the result.

Now we have data generated from Drupal. Next, let's see how to fetch this data through the REST API.

Integrating Drupal with AngularJS:

As we all know, AngularJS is an open-source front-end framework that helps develop single-page applications, dynamic web apps, and more.

Follow the steps below to develop a web page using Angular:

  • Create a folder (angularrest) inside the Drupal (d8) docroot.
  • Create a file, say index.html.
  • Write the logic to fetch the data and display it.
  • See the output by accessing the URL: localhost/d8/angularrest/index.html
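The fetching logic boils down to transforming Drupal's JSON response into something the template can render. A small sketch of that logic (the /d8/vbrest endpoint and the title field are assumptions based on the REST view created above):

```javascript
// Turn Drupal's JSON response (an array of node objects) into a list
// of titles the template can render.
function extractTitles(nodes) {
  return nodes.map(function (node) { return node.title; });
}

// In AngularJS this would sit in a controller:
//   $http.get('/d8/vbrest?_format=json').then(function (response) {
//     $scope.titles = extractTitles(response.data);
//   });
// and the markup would use ng-repeat="title in titles" to print {{ title }}.

var sample = [{ title: 'First article' }, { title: 'Second article' }];
console.log(extractTitles(sample)); // [ 'First article', 'Second article' ]
```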


Sample output:  

Drupal with AngularJS output

That’s it! Now you know how to integrate headless Drupal with AngularJS. Go ahead, try it on your web application and see how it works for you. Here I have covered headless Drupal, the implementation of a REST API in Drupal both programmatically and using the Views method, and finally integrating Drupal with AngularJS.

Below is the presentation on "Integrating Headless Drupal with AngularJS".

How to highlight search results in Search API Solr View in Drupal 8

In Search API, there is a search excerpt field that you can use in field-based views to highlight search results. In this article, I’m going to show you how to enable the excerpt and set it using Views. Here I’m assuming that you have already set up the Search API module and have a Search API Solr view.

Follow the steps:

Go to Manage -> Configuration -> Search and Metadata -> Search API.

In your search API index ‘processors’ tab, enable Highlight Processor as shown below.

search API index ‘processors’ tab

In the processor settings tab, check the “Create excerpt” option. You can also choose the fields used to generate the excerpt.

Create excerpt field

Save the configurations and re-index all the data so that the added configuration will take effect.

Edit your Search API Solr view. You can display highlighted results only if your view is displaying fields. However, if you need to build a custom view based search_api search that renders entities instead of using fields, the excerpt info stays hidden in the view result array.

custom view based search_api

 

Click on “Add fields” and select the excerpt field.

Create excerpt field

You can add other fields along with the Excerpt field as per your requirements. Save the view and check the search results. You will be able to see the highlighted output!

Hope now you know how to highlight search results in a Search API Solr view on a Drupal 8 website. If you have any suggestions or queries, please comment below and let me try to answer.

An overview of Routing in AngularJS

So far we have gone through a series of AngularJS components, such as data binding methods, modules & controllers, filters, and custom directives. In this blog, we will discuss routing techniques, to be followed by other components such as Scope and Services. So let’s talk about Routing. As the name suggests, Routing means path. It allows developers to use Views & Controllers based on path matching. It’s simple and straightforward. Here is how it works:

  1. Look for the path (hash path), i.e. the triggered path.
  2. Get the content for that path, i.e. from the View/HTML.
  3. Return the response back to the View by injecting it into the HTML or by manipulating the DOM.

Routing plays a critical role when we configure a custom application to render specific content or fetch content from a particular URL based on path matching. Further, it is helpful when you build an SPA (Single Page Application) - one of the important reasons to use AngularJS.

Technically speaking, Routing allows you to connect the View and Controller dynamically based on the requested URL. Just to let you know, Routing is not part of the core AngularJS module and comes as an additional package. To make your application work, you need to enable ngRoute and configure your routes through the $routeProvider API. The ngView directive is responsible for rendering the content in your View. In AngularJS, routing is performed on the client side.

There are several ways to perform routing in AngularJS, however, here we will discuss ngRoute in AngularJS.

Let's see how to get the routing module.

Visit AngularJS official website https://angularjs.org/ and click on DOWNLOAD ANGULARJS link.
 

Download AngularJS

In the Download AngularJS modal box, click on “Browse additional modules” and you will be redirected to https://code.angularjs.org/1.6.7/ . Look for the route module in its different formats, like angular-route.js and angular-route.min.js.
 

Additional module download

Add ngRoute to your script either by pointing to https://code.angularjs.org/1.6.7/angular-route.js (or the minified version, angular-route.min.js), or download it locally and include it in your custom application.

Cutting straight to the chase: using the codebase below, we can pull the data from an external template and display it on a hash path.

Note: All routes defined in the router are case sensitive, so use them exactly as written. If you want end users to access URLs irrespective of case, set the route parameter caseInsensitiveMatch to true; the path can then be accessed without reflecting a 404.

Codebase:

Below is the codebase for the template View. As you can see, we have added minimal code for simplification. There are three JavaScript files in this Angular application: the first is the minified version of Angular, as in all Angular applications; the second is the minified route module, which is not part of the core AngularJS package; the third is a custom JS file where we write our logic and configure the $routeProvider API.

angular-route.html


aroute.js
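The code screenshots did not survive here. Below is a minimal sketch of what aroute.js might contain, reconstructed from the output described next (the module name myApp is an assumption):

```javascript
// aroute.js (sketch; module name "myApp" is an assumption).
// angular-route.html would include angular.min.js, angular-route.min.js,
// this file, and an element carrying the ng-view directive.
var app = angular.module('myApp', ['ngRoute']);

app.config(function ($routeProvider) {
  $routeProvider
    .when('/about', {
      // Rendered inside ng-view when the user visits #/about.
      template: '<p>Angular Page with route</p>'
    });
});
```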


Output:

In the output below, “Angular Page with route”, the response comes from the template and is injected into ng-view. This happens when the user accesses the URL [../angular-route.html#/about] as mentioned. It doesn’t reload the page; it injects the output inside ng-view, acting like a local application by loading the page without a refresh.

Inspect the element or enable Firebug to check the formatting and the way the data is rendered.
 

AngularJS route output


Similarly, you can add multiple paths under $routeProvider.

Sourcecode:

Here we have added one more route with a templateUrl option that fetches the file from the given location and renders it in the View.
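The source-code screenshot is missing; a hedged sketch of the added route (the /career path is an assumption based on the surrounding text):

```javascript
$routeProvider
  .when('/career', {
    // Fetches Career.htm from the server and renders it inside ng-view.
    templateUrl: 'Career.htm'
  });
```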
    

Career.htm

 

<p>Angular Page with route with templateUrl</p>

Output: 
 

AngularJS route with templateURL

So far we have used template and templateUrl. I believe, by now, you will be confident enough to use Routing. Moving to the next level, we will add a Controller to the route property so that the View responds accordingly. This helps assign an individual controller to a specific route.
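The screenshot of this source code is missing; here is a sketch of what a route with its own controller might look like ($scope.result matches the binding in Career.htm below; the message text is an assumption):

```javascript
$routeProvider
  .when('/career', {
    templateUrl: 'Career.htm',
    // The controller runs business logic for this route only.
    controller: function ($scope) {
      $scope.result = 'Response prepared for the career route';
    }
  });
```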


In the above source code, we have added a controller under the career route to perform business logic and pass the response to career.htm.

Career.htm

<div>
     {{result}}
</div>

This is how we render the data in the View. Here the data is retrieved from the Scope and bound to the View. The best part is that the browser fetches this templateUrl only once; subsequent requests are served from the cache.

To see this, use the Network tab in Firebug and send the same request multiple times; the external file is loaded only once. Below is a screenshot of the Network tab.
 

AngularJS route output1

Routing also handles the default route (/), which simply renders the View/HTML when the default path is requested. It’s simple.

    .when('/', {
        template : '<p>Angular Home Page with route</p>'
    })

If you don’t specify any route location in the URL, the router falls back to the default path, which is (/).

AngularJS route output2

What if the URL is not valid - how can we handle that? Here .otherwise does the trick when a user tries to visit a page that is not available in the routing configuration. In such cases, you can handle the exception by redirecting or by showing a meaningful message.


.otherwise ({
    template: '<p>Choose item from link.</p>'
})

 

AngularJS route output3


Another attribute available under routing is redirectTo.

Quite often we come across a situation where we don’t want to change a URL because it is user-friendly, and we want to maintain the URL pattern for future reference instead of changing it in the backend. Changing URLs can be really painful in a bigger application when you don’t know what it will impact.
 
The solution is redirectTo, which allows you to redirect users from an existing path.

.when('/location', {
                 redirectTo: '/career'
})
  
 
In the above source code, /location is redirected to the [/career] route on every page request. We can also redirect based on a condition.

.when('/location', {
    redirectTo: function () {
        console.log('path redirection');
        return "/";
    }
})

Here we redirect based on a certain condition and return to the default path.
 

AngularJS route output4

I believe this part of the series is enough to get started with AngularJS Routing. We have covered different aspects of routing: fetching a View from a template and from an external file, redirection with functional logic, default path handling, invalid path handling, and case-sensitivity handling. You will be able to create and use these within your own custom AngularJS application.

 

Understanding PHPUnit and How to write Unit test cases

Every developer knows how painful bugs can be, especially in the production stage, as fixing them takes hours of hard work. Though development teams always do their best to work out bugs during development, a number of bugs still creep into the code. So what can be done to fix these bugs and eliminate the repetitive task of manual testing? One way is to go for Unit Testing - a well-known methodology for writing unit test cases, here in PHP.

PHPUnit is a programmer-oriented testing framework and an outstanding framework for writing unit tests for PHP web applications. With the help of PHPUnit, we can practice test-driven development.

Related: How to Write PHP Unit Tests for Drupal 8

Before diving into PHPUnit, let’s have a look at types of testing.

Types of Testing

Testing is about verifying a product to find out whether it meets specified requirements or not. Typically, there are four types of testing:

  1. Unit Testing
  2. Functional Testing
  3. Integration Testing
  4. Acceptance Testing

Unit Testing: Analysing a small piece of code is known as unit testing. Each unit test targets a unit of code in isolation. Unit tests should be as simple as possible and should not depend on other functions or classes.

Functional Testing: Testing based on functional requirements/specifications is called functional testing. Here we check whether the functionality under test provides the output required by the end user.

Integration Testing: This builds on top of unit testing. In integration testing, we combine two units and check whether the combination works correctly. The purpose of this testing is to expose faults in the interaction between integrated units.

Acceptance Testing: This is the last phase of the testing process. Here we check the behaviour of the whole application from the user's side. End users insert data and check whether the output meets the required specifications. They check the flow, not the internal functionality.

Related: Unit Testing improves your product quality in 9 ways

Why write Unit Tests

One of the main benefits of writing unit tests is that they reduce bugs in new and existing features. Unit testing identifies defects before the code is sent for integration testing, and it improves design. By unit testing, we can find bugs at an early stage, which eventually reduces the cost of bug fixing. It also allows developers to refactor code or upgrade systems safely. Further, it makes development faster and improves the quality of the code.

PHPUnit: Writing tests manually and running them often takes a lot of time, so we need an automation tool. PHPUnit is currently the most popular PHP unit testing framework.

It provides various features like mocking objects, code coverage analysis, logging etc. It belongs to xUnit libraries. You can use these libraries to create automatically executable tests, which verifies your application behavior.

Installing PHPUnit (Prerequisites)

  • Use the latest version of PHP.
  • PHPUnit requires the dom, json, pcre, reflection, and spl extensions, which are enabled by default.

Installation (Command line interface)

Download the PHP Archive (PHAR) to obtain PHPUnit. To install the PHAR globally, run the following commands on the command line.

$ wget https://phar.phpunit.de/phpunit-6.5.phar
$ chmod +x phpunit-6.5.phar
$ sudo mv phpunit-6.5.phar /usr/local/bin/phpunit
$ phpunit --version

Via Composer

If you have Composer installed on your system, you can add PHPUnit with a single command.

composer require --dev phpunit/phpunit

There are a lot of assertion and annotation methods available in PHPUnit; let’s have a look at some we can use.

Assertions

PHPUnit assertion methods check whether the value you pass meets an expectation and report a failure if it does not.

assertEmpty(mixed $actual[, string $message = '']) - It reports a failure if $actual is not empty.

Example:
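The example screenshot is missing here; a sketch of what it might look like (class and variable names are assumptions, chosen to match the output described below):

```php
<?php

use PHPUnit\Framework\TestCase;

class EmptyTest extends TestCase {

  public function testEmpty() {
    $actual = ['foo'];
    // Fails because the array is not empty.
    $this->assertEmpty($actual);
  }

}
```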
Output:

PHPUnit Assertion


The test fails because the array is not empty.

assertEquals(mixed $expected, mixed $actual[, string $message = ''])

It reports a failure when $expected is not equal to $actual; if they are equal, the assertion passes.

Example:
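The screenshot of this example is also missing; a sketch (class and method names assumed), with both assertions failing as the output below describes:

```php
<?php

use PHPUnit\Framework\TestCase;

class EqualsTest extends TestCase {

  public function testIntegers() {
    $this->assertEquals(1, 0);          // fails: 1 is not equal to 0
  }

  public function testStrings() {
    $this->assertEquals('bar', 'baz');  // fails: 'bar' is not equal to 'baz'
  }

}
```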

Output:

PHPUnit Assertion 2

This fails because 1 is not equal to 0 and 'bar' is not equal to 'baz'.

Annotations


@dataProvider
A test method can accept arbitrary arguments. These arguments are supplied by a data provider method - a public method that returns an array of arrays or objects. We specify the data provider with the @dataProvider annotation.

Example For Data provider:
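The original code screenshot is missing; below is a sketch based on PHPUnit's classic addition example, using the additionProvider method the text refers to:

```php
<?php

use PHPUnit\Framework\TestCase;

class DataTest extends TestCase {

  // Each inner array becomes one invocation of testAdd().
  public function additionProvider() {
    return [
      [0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 2],
    ];
  }

  /**
   * @dataProvider additionProvider
   */
  public function testAdd($a, $b, $expected) {
    $this->assertEquals($expected, $a + $b);
  }

}
```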

Output:

PHPUnit Annotation

In the above code snippet, additionProvider is the data provider. We can use one provider in as many tests as we want.

@depends

PHPUnit supports explicit dependencies between test methods. Using the @depends annotation, a test method can declare that it depends on another test method and consume its return value.

Example: 
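The example screenshot is missing; a sketch based on PHPUnit's well-known stack example, matching the testEmpty()/testPush() methods discussed below:

```php
<?php

use PHPUnit\Framework\TestCase;

class StackTest extends TestCase {

  public function testEmpty() {
    $stack = [];
    $this->assertEmpty($stack);
    return $stack;   // passed to the dependent test
  }

  /**
   * @depends testEmpty
   */
  public function testPush(array $stack) {
    array_push($stack, 'foo');
    $this->assertEquals('foo', $stack[count($stack) - 1]);
  }

}
```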

Output:

PHP unit

In the above example, we create a value in testEmpty() and use it in the dependent method: testPush() depends on testEmpty(), whose outcome is passed into testPush().

The tests go into a class such as ClassTest, which inherits from PHPUnit\Framework\TestCase. Test methods are public and every method name should start with test. Inside these methods we use assertion methods, and annotations are placed immediately before the method.

PHPUnit @depends


setUp() and tearDown() methods:

We can share setup code across test methods. Before each test method runs, the setUp() template method is invoked; setUp() creates the objects we test. After each test method runs, whether it failed or succeeded, the tearDown() template method is invoked; tearDown() cleans up those objects.

Example for setUp() and tearDown() methods:
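The code screenshot is missing; a sketch using the $name instance variable mentioned below (the class name and values are assumptions):

```php
<?php

use PHPUnit\Framework\TestCase;

class NameTest extends TestCase {

  protected $name;

  // Runs before every test method.
  protected function setUp() {
    $this->name = 'PHPUnit';
  }

  public function testNameIsSet() {
    $this->assertEquals('PHPUnit', $this->name);
  }

  // Runs after every test method, whether it passed or failed.
  protected function tearDown() {
    $this->name = NULL;
  }

}
```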

In the above example, we declare an instance variable $name in setUp() and use it in the other test methods.

If the setUp() code differs only slightly between tests, move the differing part into the test methods themselves. If you need completely different setups, you need another test case class. At this point, we’re ready to use PHPUnit to make writing unit tests easier and improve software quality.

Hope you find this introduction to unit testing helpful. Unit testing is a vast topic; here I have given a brief introduction so that you can start writing your own tests. Please comment below if you have any questions or suggestions.

Below is the presentation on "Getting Started With PHPUnit Testing".
