Well, the title is a hyperbole. Now that I have got your attention, let us get started. It might be a stretch as of today to say that we can kill Twitter, but in this post I would like to show that it may not be impossible after all, at least in a couple of years.
A few things to know before we start killing Twitter.
It starts with realising that you are doing Twitter a favour, not the other way around. Yes, I agree that Twitter has been a great tool and that it even played a role in the Arab Spring. Check out Social Media Made the Arab Spring, But Couldn't Save It for further details.
But we need to realise that while these are pleasant side-effects of Twitter and social media, for a service or business to be sustainable it has to be profitable, or at least have profit-generating potential in the future. Irrespective of whether the service follows an ad-revenue model or a freemium model, one thing is common: either you pay for the service, or the service sells something to somebody.
Understanding what that something is, and to whom it is sold, is important.
Let us start with the most quoted line about free (or seemingly free) services: "If you are not paying for it, you're not the customer; you're the product being sold."
Most social media users forget the value they are adding to these networks. It is easy for us to see a blog post or a video as data/content, but we fail to realise that even the short status updates we post on social media websites, and our comments on them, are also content.
Every action we take on social media is valuable and adds to the valuation of the platform. How much each action is valued, and how it is valued, requires a detailed analysis (I will follow up this post with a couple of related posts on the topic). But for now let us understand this much.
Every action that we do on a social media website falls into one of the following categories.
Content creation
Content curation
Content Distribution
Training the AI models.
I have tried to highlight the same in this tweet of mine.
It is difficult for people to understand this as they cannot see it clearly, or rather there is no way for them to see it. It only becomes clear in conversations like the following. In February this year, when aantonop was complaining about how Facebook was locking him out, one of the users mentioned this.
Andreas's reply was interesting.
So it brings us to the question of who is benefitting from whom. Is the platform benefitting from the user, or is the user benefitting from the platform? At best it is a synergy between the platform and the user. At worst the platform is ripping off your data and making a hell of a lot of money while not rewarding you in any way.
What is your data worth?
Data, and the value it creates, has different lifetimes, and there are a lot of overlaps, so it is difficult to put a value on it. Let us use a very crude way to identify the average minimum value of our data on Facebook. Facebook is valued at 600 billion USD today and has around 2 billion users. Since Facebook makes money primarily by showing ads and/or selling your data :P, the data created by each user should be worth at least 300 USD.
One thing everybody seems to agree on is that data is the new oil and it is valuable. But what most of us fail to understand is that oil has a single lifecycle whereas data has multiple life-cycles. So any valuation you put on a piece of data is only a moving value affected by various parameters. We also need to realise that data we consider archived or stale has revenue-generating potential in the future: AI models will need a lot of data going forward and will unlock that potential. In the following article you can check out how Bottos and Databroker DAO are unlocking the potential of data from various sources.
The two ways to realise the true value of your data
There are two ways you will realise that your data is worth something.
One : Have somebody like Zuck sell your data and make billions in the process.
Two : Look at the real money people make with your data.
1. When your data is sold
The Cambridge Analytica exposé happened on March 17, 2018. It made clear that user targeting is not just for ads and can be used for much more, and it raised serious concerns about users' privacy. The exposé once again proved that privacy is dead. What is more disturbing is that experts suggested it might seriously affect Facebook's future valuation. But that turned out to be completely false. Can you spot the dip in Facebook's market cap because of this scandal? I have highlighted it with a red circle towards the right end of the graph. This is what I would call "a major dip in the short term but a minor blip in the long term". The quick correction back to the trend line only suggests that nobody takes privacy seriously any more.
Facebook Marketcap
2. When you look at real money people make with your data
I am sure that Andreas M. Antonopoulos knows the value of data. I am just taking this example as it was a high-profile case where content created elsewhere was able to generate revenue on another platform because of data distribution. The interesting thing is that in this case the money was used for translating aantonop's videos into other languages. You can read more about it here.
The real aantonop
Aantonop made the above post, which can be called a "proof of identity" post verifying that he is the real aantonop. The post gathered a lot of attention and has rewards of 1449 USD. I just hope that aantonop claims the amount one day and starts using Steem more frequently.
I took aantonop's example because he is very popular in the world of Bitcoin and his videos have helped many entrepreneurs take the plunge into Bitcoin. His videos are proof that well-made content has a long shelf life and revenue-generating potential even outside the platform it was created on.
Now let's get back to our original question.
How to kill Twitter?
This might seem like an impossible proposition to many. Let us look at the reasons why it is difficult to kill Twitter, or Facebook for that matter.
I don’t need another social network.
I first got to know about Robert Scoble in the Google+ days. I invited him to check out the Steemit platform and he replied with "I don't need another social network." Today we are in an age of social media overload. A new social media platform needs to cross critical mass before everyone else follows. Replacing Facebook might be impossible for the next few years, but we might have a chance to replace Twitter with a decentralised version. Facebook has too much of a lead: it has your photos, videos, friends, memories, groups and pages, and any new entrant needs to address all of these to overcome it. With Twitter, however, a limited feature set with additional benefits should be able to move the needle in the new entrant's favour.
So for now let us assume that, given enough motivation, users might consider shifting to a new platform.
Twitter has first mover advantage
Twitter is huge and has first-mover advantage. Yes, that might be the case. But last year proved that with the right incentive models you can have a jumpstart: Binance became the fastest unicorn in history.
So don't be surprised if a new entrant replaces Twitter in less than a year.
Show me the money
Attributing a value to content is a tough task, and there have been many unsuccessful attempts in the past. I think the Steem blockchain has come further than any other attempt. By incentivising both content creation and content curation, Steem has figured out a subjective way to attribute value to content. With the release of SMTs later this year, the community will only get better at arriving at closer estimates of the value of posts. When people were told that their content was worth something, they were not able to relate to it. With platforms like Steem having put a definitive value on content and having paid the same to content creators (which many have encashed to fiat), the idea is more tangible now. Monetary incentives can do wonders, and as more people get to know about these platforms the effect will only compound.
Hitting the critical mass
To be a serious contender to Twitter, the new platform needs to hit critical mass. This can be the real challenge. So here are the things that can be done.
Create a distributed cryptocurrency on the lines of Steem (especially the rewards mechanism). Keep the interface, UX and restrictions (like the number of characters) very similar to Twitter, so that people feel at home ;)
In addition to normal account creation, have a reserved namespace twitter-[twitter-handle]. This will be used for creating a one-to-one mapping of user accounts from Twitter to the new blockchain.
User accounts for each Twitter user are also created on the new platform. Both usernames and passwords (private keys) will be created. Twitter users can claim their account by sending a tweet to the Twitter handle of the new blockchain; the password or private keys will be DM'ed to them.
Since all tweets are public, duplicate them on the new platform under the users' accounts. If that is a stretch, it can start with the latest tweets of popular accounts and expand slowly.
Beta users will have access to popular content on the new platform. Their retweets and likes will decide the value of the new tweets mirrored from Twitter.
While users might be hesitant to create new accounts, I think there will be very few people who will not be happy to claim their existing accounts, especially when they know there are rewards waiting for them to encash for the content they have already created.
The incentives or rewards received on the new platform will be bigger for users with huge follower counts (assuming their content is also liked by the beta users on the new platform). So if these influencers move to the new platform, they will bring along at least some part of their following.
Considering that content on the blockchain will be censorship-resistant and that good content is rewarded, the platform should be able to take off and hit critical mass very soon.
I am not sure what legal issues would surround an attempt like this, but I think it is definitely worth trying. A few crypto-millionaires coming together should have enough funds to try something like this. What do you think? Will an attempt like this work? Share your thoughts.
I am making you a pinky promise that by the end of this post you will know JavaScript Promises better.
I have had a kind of love-hate relationship with JavaScript. Having worked on Java and PHP for the last 10 years, JavaScript seemed very different but was always intriguing to me. I did not get to spend enough time on it and have been trying to make up for that of late.
Promises were the first interesting topic I came across. Time and again I have heard people say that Promises save you from callback hell. While that might be a pleasant side-effect, there is more to Promises, and here is what I have been able to figure out till now.
Background
When you start working with JavaScript for the first time it can be a little frustrating. You will hear some people say that JavaScript is a synchronous programming language, while others claim that it is asynchronous. You hear about blocking code, non-blocking code, the event-driven design pattern, the event life cycle, the function stack, the event queue, bubbling, polyfills, Babel, Angular, React, Vue and a ton of other tools and libraries. Fret not. You are not the first; there is a term for this as well. It is called JavaScript fatigue. You should check out the following article. There is a reason this post got 42k claps on Hackernoon :)
JavaScript is a synchronous programming language. But thanks to callback functions we can make it function like an asynchronous programming language.
Promises for the layman
Promises in JavaScript are very similar to promises in real life. So let us first look at promises in real life. The dictionary definition of a promise is as follows:
promise : noun : Assurance that one will do something or that a particular thing will happen.
So what happens when somebody makes you a promise?
A promise gives you an assurance that something will be done. Whether they (who made the promise) do it themselves or get it done by others is immaterial. They give you an assurance based on which you can plan something.
A promise can either be kept or broken.
When a promise is kept you expect something out of that promise which you can make use of for your further actions or plans.
When a promise is broken, you would like to know why the person who made the promise was not able to keep up his side of the bargain. Once you know the reason and have a confirmation that the promise has been broken you can plan what to do next or how to handle it.
At the time of making a promise, all we have is an assurance. We will not be able to act on it immediately. We can decide and formulate what needs to be done when the promise is kept (and hence we have the expected outcome) or when it is broken (we know the reason and hence can plan a contingency).
There is a chance that you may not hear back from the person who made the promise at all. In such cases you would prefer to keep a time threshold: say, if the person who made the promise doesn't come back to you in 10 days, you will assume he had some issues and will not keep his promise. So even if the person comes back after 15 days it doesn't matter any more, as you have already made alternate plans.
Promises in JavaScript
As a rule of thumb for JavaScript, I always read the documentation on MDN Web Docs; of all the resources, I think it provides the most concise details. I read the Promises page from MDN Web Docs and played around with code to get the hang of it.
There are two parts to understanding promises: the creation of promises and the handling of promises. Though most of our code will generally cater to handling promises created by other libraries, a complete understanding will help us for sure, and understanding the creation of promises is equally important once you cross the beginner stage.
Creation of Promises
Let us look at the signature for creating a new promise.
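A minimal sketch of the constructor's shape:

new Promise(/* executor */ function(resolve, reject) {
  // asynchronous work is initiated here; call resolve(value) on success or reject(reason) on failure
});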
The constructor accepts a function called the executor. This executor function accepts two parameters, resolve and reject, which are in turn functions. Promises are generally used for easier handling of asynchronous operations or blocking code: file operations, API calls, DB calls, IO calls etc. These asynchronous operations are initiated within the executor function. If an asynchronous operation is successful, the expected result is returned by calling the resolve function; similarly, if there was some unexpected error, the reason is passed on by calling the reject function.
Now that we know how to create a promise, let us create a simple one for understanding's sake.
var keepsHisWord = true;
var promise1 = new Promise(function(resolve, reject) {
if (keepsHisWord) {
resolve("The man likes to keep his word");
} else {
reject("The man doesnt want to keep his word");
}
});
console.log(promise1);
Every promise has a state and value
Since this promise gets resolved right away, we will not be able to inspect its initial state. So let us create a new promise that will take some time to resolve. The easiest way to do that is to use the setTimeout function.
var promise2 = new Promise(function(resolve, reject) {
setTimeout(function() {
resolve({
message: "The man likes to keep his word",
code: "aManKeepsHisWord"
});
}, 10 * 1000);
});
console.log(promise2);
The above code just creates a promise that will resolve unconditionally after 10 seconds. So we can check the state of the promise until it is resolved.
state of promise until it is resolved or rejected
Once the ten seconds are over, the promise is resolved and both PromiseStatus and PromiseValue are updated accordingly. As you can see, we updated the resolve call to pass a JSON object instead of a simple string, just to show that we can pass other values as well.
A promise that resolves after 10 seconds with a JSON object as returned value
Now let us look at a promise that will reject. Let us just modify promise1 a little for this.
keepsHisWord = false;
var promise3 = new Promise(function(resolve, reject) {
if (keepsHisWord) {
resolve("The man likes to keep his word");
} else {
reject("The man doesn't want to keep his word");
}
});
console.log(promise3);
Since this creates an unhandled rejection, the Chrome browser will show an error. You can ignore it for now; we will get back to it later.
rejections in promises
As we can see, PromiseStatus can have three different values: pending, resolved, or rejected. When a promise is created, PromiseStatus will be pending and PromiseValue will be undefined until the promise is either resolved or rejected. When a promise is in the resolved or rejected state, it is said to be settled. So a promise generally transitions from the pending state to the settled state.
Now that we know how promises are created we can look at how we can use or handle promises. This will go hand in hand with understanding the Promise object.
Understanding the Promise object
As per MDN documentation
The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.
The Promise object has static methods and prototype methods. Static methods can be applied independently, whereas prototype methods need to be applied on instances of a Promise. Remembering that both kinds return a Promise makes it much easier to make sense of things.
Prototype Methods
Let us first start with the prototype methods; there are three of them. Just to reiterate, all these methods can be applied on an instance of a Promise and all of them return a Promise in turn. They assign handlers for the different state transitions of a promise. As we saw earlier, when a promise is created it is in the pending state. One or more of the following three handlers will run when a promise is settled, depending on whether it is fulfilled or rejected.
Promise.prototype.catch(onRejected)
Promise.prototype.then(onFulfilled, onRejected)
Promise.prototype.finally(onFinally)
The image below shows the flow for the .then and .catch methods. Since they return a Promise, they can be chained, which is also shown in the image. If .finally is declared for a promise, it will be executed whenever the promise is settled, irrespective of whether it is fulfilled or rejected.
Here is a small story. You are a school-going kid and you ask your mom for a phone. She says, "I will buy you a phone at the end of this month."
Let us look at how it will look in JavaScript if the promise gets executed at the end of the month.
var momsPromise = new Promise(function(resolve, reject) {
var momsSavings = 20000;
var priceOfPhone = 60000;
if (momsSavings > priceOfPhone) {
resolve({
brand: "iphone",
model: "6s"
});
} else {
reject("We donot have enough savings. Let us save some more money.");
}
});
momsPromise.then(function(value) {
console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
});
momsPromise.catch(function(reason) {
console.log("Mom coudn't buy me the phone because ", reason);
});
momsPromise.finally(function() {
console.log(
"Irrespecitve of whether my mom can buy me a phone or not, I still love her"
);
});
The output for this will be:
moms failed promise.
If we change the value of momsSavings to 200000 then mom will be able to gift the phone to her son. In that case, the output will be:
mom keeps her promise.
Let us wear the hat of somebody who consumes this promise. We are mocking the output and its nature so that we can look at how to use .then and .catch effectively.
Since .then can assign both the onFulfilled and onRejected handlers, instead of writing separate .then and .catch we could have done the same with a single .then. It would have looked like this:
momsPromise.then(
function(value) {
console.log("Hurray I got this phone as a gift ", JSON.stringify(value));
},
function(reason) {
console.log("Mom coudn't buy me the phone because ", reason);
}
);
But for readability of the code I think it is better to keep them separate.
To make sure that we can run all these samples in browsers in general, and Chrome in particular, I am making sure that we have no external dependencies in our code samples. To better understand the further topics, let us create a function that returns a promise which will be resolved or rejected randomly, so that we can test out various scenarios. To understand the concept of asynchronous functions, let us also introduce a random delay into our function. Since we will need random numbers, let us first create a random function that returns a random number between start and end.
function getRandomNumber(start = 1, end = 10) {
//works when both start,end are >=1 and end > start
return parseInt(Math.random() * end) % (end-start+1) + start;
}
Let us create a function that will return a promise for us. Let us call our function promiseTRRARNOSG, which is an alias for promiseThatResolvesRandomlyAfterRandomNumberOfSecondsGenerator. This function creates a promise which will resolve or reject after a random number of seconds between 2 and 10. To randomise resolving and rejecting, we generate a random number between 1 and 10: if it is greater than 5 we resolve the promise, else we reject it.
var testPromise = promiseTRRARNOSG();
testPromise.then(function(value) {
console.log("Value when promise is resolved : ", value);
});
testPromise.catch(function(reason) {
console.log("Reason when promise is rejected : ", reason);
});
// Let us loop through and create ten different promises using the function to see some variation. Some will be resolved and some will be rejected.
for (let i = 1; i <= 10; i++) {
let promise = promiseTRRARNOSG();
promise.then(function(value) {
console.log("Value when promise is resolved : ", value);
});
promise.catch(function(reason) {
console.log("Reason when promise is rejected : ", reason);
});
}
Refresh the browser page and run the code in the console to see the different outputs for the resolve and reject scenarios. Going forward we will see how we can create multiple promises and check their outputs without having to do this.
Static Methods
There are four static methods in Promise object.
The first two are helper methods or shortcuts. They help you create resolved or rejected promises easily.
Promise.reject(reason)
Helps you create a rejected promise.
var promise3 = Promise.reject("Not interested");
promise3.then(function(value){
console.log("This will not run as it is a resolved promise. The resolved value is ", value);
});
promise3.catch(function(reason){
console.log("This run as it is a rejected promise. The reason is ", reason);
});
Promise.resolve(value)
Helps you create a resolved promise.
var promise4 = Promise.resolve(1);
promise4.then(function(value){
console.log("This will run as it is a resovled promise. The resolved value is ", value);
});
promise4.catch(function(reason){
console.log("This will not run as it is a resolved promise", reason);
});
On a side note, a promise can have multiple handlers. So you can update the above code to:
var promise4 = Promise.resolve(1);
promise4.then(function(value){
console.log("This will run as it is a resovled promise. The resolved value is ", value);
});
promise4.then(function(value){
console.log("This will also run as multiple handlers can be added. Printing twice the resolved value which is ", value * 2);
});
promise4.catch(function(reason){
console.log("This will not run as it is a resolved promise", reason);
});
And the output will look like this:
The next two methods help you process a set of promises. When you are dealing with multiple promises, it is better to create an array of promises first and then perform the necessary action on the whole set. For understanding these methods, our handy promiseTRRARNOSG is too random; it is better to have some deterministic promises so that we can understand the behaviour. Let us create two functions: one that resolves after n seconds and one that rejects after n seconds.
var promiseTRSANSG = (promiseThatResolvesAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
resolve({
resolvedAfterNSeconds: n
});
}, n * 1000);
});
});
var promiseTRJANSG = (promiseThatRejectsAfterNSecondsGenerator = function(
n = 0
) {
return new Promise(function(resolve, reject) {
setTimeout(function() {
reject({
rejectedAfterNSeconds: n
});
}, n * 1000);
});
});
Now let us use these helper functions to understand Promise.all.
Promise.all
As per MDN documentation
The Promise.all(iterable) method returns a single Promise that resolves when all of the promises in the iterable argument have resolved or when the iterable argument contains no promises. It rejects with the reason of the first promise that rejects.
Case 1 : When all the promises are resolved. This is the most frequently used scenario.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(2));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.log("One of the promises failed with the following reason", reason);
});
All promises resolved.
There are two important observations we need to make in general from the output.
First : The third promise, which takes 2 seconds, finishes before the second promise, which takes 4 seconds. But as you can see in the output, the order of the promises is maintained in the values.
Second : I added a console timer to find out how long Promise.all takes. If the promises were executed sequentially, it would have taken 1+4+2=7 seconds in total. But our timer shows that it takes only 4 seconds, the duration of the longest promise. This is proof that all the promises were executed in parallel.
Case 2 : When the iterable contains no promises. I think this is the least frequently used scenario.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(1);
promisesArray.push(4);
promisesArray.push(2);
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.log("One of the promises failed with the following reason", reason);
});
Since there are no promises in the array, the returned promise resolves immediately with the plain values.
Case 3 : It rejects with the reason of the first promise that rejects.
console.time("Promise.All");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(1));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRJANSG(2));
promisesArray.push(promiseTRSANSG(4));
var handleAllPromises = Promise.all(promisesArray);
handleAllPromises.then(function(values) {
console.timeEnd("Promise.All");
console.log("All the promises are resolved", values);
});
handleAllPromises.catch(function(reason) {
console.timeEnd("Promise.All");
console.log("One of the promises failed with the following reason ", reason);
});
The combined promise rejected as soon as the first promise rejected
Promise.race
As per MDN documentation
The Promise.race(iterable) method returns a promise that resolves or rejects as soon as one of the promises in the iterable resolves or rejects, with the value or reason from that promise.
Case 1 : One of the promises resolves first.
console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(3));
promisesArray.push(promiseTRSANSG(2));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
console.timeEnd("Promise.race");
console.log("The fasted promise resolved", values);
});
promisesRace.catch(function(reason) {
console.timeEnd("Promise.race");
console.log("The fastest promise rejected with the following reason ", reason);
});
fastest resolution
All the promises are run in parallel. The third promise resolves in 2 seconds. As soon as this is done the promise returned by Promise.race is resolved.
Case 2: One of the promises rejects first.
console.time("Promise.race");
var promisesArray = [];
promisesArray.push(promiseTRSANSG(4));
promisesArray.push(promiseTRSANSG(6));
promisesArray.push(promiseTRSANSG(5));
promisesArray.push(promiseTRJANSG(3));
promisesArray.push(promiseTRSANSG(4));
var promisesRace = Promise.race(promisesArray);
promisesRace.then(function(values) {
console.timeEnd("Promise.race");
console.log("The fasted promise resolved", values);
});
promisesRace.catch(function(reason) {
console.timeEnd("Promise.race");
console.log("The fastest promise rejected with the following reason ", reason);
});
fastest rejection
All the promises are run in parallel. The fourth promise rejected in 3 seconds. As soon as this is done the promise returned by Promise.race is rejected.
I have written all the example methods so that I can test out various scenarios, and the tests can be run in the browser itself. That is the reason you don't see any API calls, file operations or database calls in the examples. While all of these are real-life use cases, they need additional effort to set up and test, whereas the delay functions give you similar scenarios without the burden of additional setup. You can easily play around with the values to check out different scenarios, and you can use a combination of the promiseTRJANSG, promiseTRSANSG and promiseTRRARNOSG methods to simulate enough scenarios for a thorough understanding of promises. Also, using console.time before and after relevant blocks will easily tell you whether the promises ran in parallel or sequentially. Let me know if you have any other interesting scenarios or if I have missed something. If you want all the code samples in a single place, check out this gist.
Bluebird, a popular promise library, has some interesting features like
Promise.prototype.timeout
Promise.some
Promise.promisify
We will discuss these in a separate post.
I will also be writing one more post about my learnings from async and await.
Before closing, I would like to list the rules of thumb I follow to keep my head sane around promises.
Use promises whenever you are using async or blocking code.
resolve maps to then and reject maps to catch for all practical purposes.
Make sure to write both .catch and .then methods for all the promises.
If something needs to be done in both cases, use .finally.
We only get one shot at settling (resolving or rejecting) each promise; once settled, its state cannot change.
We can add multiple handlers to a single promise.
The return type of all the methods in the Promise object, whether static or prototype, is again a Promise.
In Promise.all, the order of the promises is maintained in the values variable irrespective of which promise resolved first.
CoinMarketCap has some global charts which help you get insights into the overall cryptocurrency markets. You can find them at https://coinmarketcap.com/charts/. I was particularly interested in the dominance chart, as I was trying to analyze how Bitcoin and altcoin dominance affects markets and what role it played on important dates in the last year.
Recently, when I was trying to sample data from the above chart for an article about "Bitcoin Dominance and the Rise of Others" - https://medium.com/@gokulnk/bitcoin-dominance-and-the-emergence-of-others-64a7996272ad - it was taking a lot of time to get the data and it was really irritating. I had to mouse over the graph and copy the data manually into the Medium article I was writing. I was using Mac split-screen for this and it was not easy to switch focus between the split screens, which only added to the frustration. Comment if you know how to do it better.
So I set out to write a small script to fetch the data. Though the script took a little longer than I expected, I think I will save a lot of time going forward whenever I want to do sampling. I am putting the script out so that others can use it too.
Just visit the page https://coinmarketcap.com/charts/ and copy-paste the following code into the console to get the relevant data. You can also edit the coinsForDominance and datesForDominance variables to get the data that you need.
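A rough sketch of the idea, not the original script: CoinMarketCap's charts are rendered with Highcharts, which exposes its chart instances on the global Highcharts object, so the data points can be read straight from the console. The variable defaults below are hypothetical.

var coinsForDominance = ["Bitcoin", "Ethereum"]; // hypothetical defaults
var datesForDominance = ["2017-12-17", "2018-01-01"]; // hypothetical defaults

Highcharts.charts.forEach(function(chart) {
  if (!chart) return;
  chart.series.forEach(function(series) {
    if (coinsForDominance.indexOf(series.name) === -1) return;
    series.data.forEach(function(point) {
      // point.x is a millisecond timestamp on time-series charts
      var date = new Date(point.x).toISOString().slice(0, 10);
      if (datesForDominance.indexOf(date) !== -1) {
        console.log(series.name, date, point.y);
      }
    });
  });
});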
As a Drupal developer, you must have heard the phrase "Headless Drupal" and wondered what exactly it is and how it differs from standard Drupal. No worries! We will take a brief look at the various facets of Headless Drupal and how to implement a REST API in Drupal, both programmatically and through the Views method. We will also explore how to integrate Drupal with AngularJS. Let's try to understand.
In short, Headless Drupal is nothing but a front-end framework decoupled from the Drupal backend that stores the data. Here, the front-end is responsible for what to display and requests data from Drupal as needed. Users interact with the front-end framework rather than the backend CMS. Instead of serving rendered HTML, Drupal provides the data in JSON format to a front-end framework like AngularJS, Ember.js or React.js.
Cutting a long story short, how does the headless web work?
First, let’s see the flow of headless Drupal and how to integrate front-end framework.
Static web: Static Html page directly interacts with the browser and not with backend framework
CMS web: Here, DB content and PHP logic interact with the browser.
Headless web: Front-end framework plays a crucial role between php logic and browser. Here we use API to fetch the data from CMS to write logic which is shown in the browser.
Implementing Rest API in Drupal
In order to display data in a front-end framework, we need to create a REST API plugin that will assist in fetching the data from Drupal.
Notably, there are two ways to create a REST API plugin in Drupal 8:
Programmatically
Views
Method 1: Programmatically
Step 1. Create a custom module using Drupal Console
Command: drupal generate:module
Step 2. Now generate the REST API plugin with the help of Drupal Console
Command: drupal generate:plugin:rest:resource
Step 3. After creating the REST API programmatically, you can see a folder structure similar to the one below.
Step 4. Move to the path: admin/config/services/rest
Step 5. Enable the REST resource we created and edit its configuration: the methods (GET, POST), the formats (json, xml) and the authentication (e.g. basic_auth).
Step 6. We can now access the API URL.
Url format: /vbrest?_format=json
Note: Make sure to append the query parameter ?_format=json
Now use a tool like Postman to test whether the data is rendered or not.
Method 2: Using views
Step 1. Move to the path: admin/structure/views
Step 2. Create a new view and make sure to enable the checkbox to export the view as a REST API, and specify the URL.
Step 3. After creating the view, define the configuration for the various formats (json, hal_json, xml etc.) and the fields that are required to be exposed in the API.
Step 4. The view is created successfully. Access the API by its URL using Postman to get the result.
Now we are ready with the data generated from Drupal. Next, let us see how to fetch this data through the REST API.
Integrating Drupal with AngularJS:
As we all know, AngularJS is an open-source front-end framework that helps develop single-page applications, dynamic web apps, etc.
Follow the steps below to develop a web page using Angular:
Create a folder (angularrest) inside the Drupal (d8) docroot.
Now create a file, say index.html.
Write the logic to fetch the data and display it (see the sketch after these steps).
Now we can see the output by accessing the
url: localhost/d8/angularrest/index.html
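A minimal sketch of what index.html might contain, reusing the /vbrest endpoint created earlier; the item.title field is an assumption about the JSON payload:

<!DOCTYPE html>
<html ng-app="headlessApp">
<head>
  <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.7/angular.min.js"></script>
</head>
<body ng-controller="RestController">
  <!-- One row per item returned by the Drupal REST resource -->
  <div ng-repeat="item in items">{{ item.title }}</div>
  <script>
    angular.module("headlessApp", [])
      .controller("RestController", function($scope, $http) {
        // Adjust the path to match your Drupal installation
        $http.get("/d8/vbrest?_format=json").then(function(response) {
          $scope.items = response.data;
        });
      });
  </script>
</body>
</html>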
Sample output:
That's it! Now you know how to integrate headless Drupal with AngularJS. Go ahead, try it on your web application and see how it works for you. Here I have covered headless Drupal, the implementation of REST APIs in Drupal, creating a REST API programmatically and via the Views method, and finally integrating Drupal with AngularJS.
Below is the presentation on "Integrating Headless Drupal with AngularJS".
In Search API, there is a field for a search excerpt that you can use in field-based views to highlight search results. In this article, I'm going to show you how to enable the excerpt and set it using Views. Here I'm assuming that you have already set up the Search API module and have a Search API Solr view.
Follow the steps:
Go to Manage -> Configuration -> Search and Metadata -> Search API.
Edit your Search API Solr view. You can display highlighted results only if your view is displaying fields. If you instead build a custom search_api-based view that renders entities rather than fields, the excerpt info stays hidden in the view result array.
Click on 'Add fields' and select the Excerpt field.
You can add other fields along with the Excerpt field as per your requirements. Save the view and check the search results. You will be able to see the highlighted output!
Hope you now know how to highlight search results in a Search API Solr view on a Drupal 8 website. If you have any suggestions or queries, please comment below and let me try to answer.
So far we have gone through a series of AngularJS components, such as data binding methods, modules & controllers, filters and custom directives. In this blog, we will discuss routing techniques, to be followed by other components such as scope, services and others. So let's talk about routing. As the name suggests, routing is about paths: it allows developers to serve views & controllers based on path matching. It's simple and straightforward. Check out how:
Look for the (hash) path, i.e. the triggered path.
Get the content for that path, i.e. from the view/HTML.
Return the response back to the view by injecting it into the HTML or by manipulating the DOM.
Routing plays a critical role when we configure a custom application to render specific content, or to get content from a particular URL based on path matching. Further, it is helpful when you build an SPA (Single Page Application), one of the important reasons to use AngularJS.
Technically speaking, routing allows you to connect the view and controller dynamically based on the requested URL. Just to let you know, routing is not part of the core AngularJS module and comes as an additional package. To make your application work you need to enable ngRoute, after which your routing configuration goes through the $routeProvider API. The ngView directive is responsible for rendering the matched content in your view. In AngularJS, routing is performed on the client side.
There are several ways to perform routing in AngularJS; here, however, we will discuss ngRoute.
Let's see how to get the routing module.
Visit the AngularJS official website https://angularjs.org/ and click on the DOWNLOAD ANGULARJS link.
In the download AngularJS modal box, click on Browse additional modules and you will be redirected to https://code.angularjs.org/1.6.7/ . Look for the route module in its different formats, like angular-route.js and angular-route.min.js.
Add ngRoute to your page either by pointing a script tag to https://code.angularjs.org/1.6.7/angular-route.js (or the minified angular-route.min.js) or by downloading the file locally and connecting it to your custom application.
Cutting straight to the chase: using the codebase below, we can pull data from an external template and display it on a hash path.
Note: All routes defined in the router are case-sensitive; make sure to use them exactly as defined. If you want to allow end users to use URLs irrespective of case, set the core parameter caseInsensitiveMatch to true; paths will then resolve regardless of case instead of returning a 404.
Codebase:
Below is the codebase for the template view. As you can see, we have added minimal code for simplification. In an Angular application there are three JavaScript files: the first is the minified version of Angular, as in all Angular applications; the second is the minified version of the route module, which is not part of the core AngularJS package; the third is the custom JS file where we write our logic and extend the $routeProvider API.
angular-route.html
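A minimal sketch of the page, assuming the module defined in aroute.js is named myApp:

<!DOCTYPE html>
<html ng-app="myApp">
<head>
  <script src="https://code.angularjs.org/1.6.7/angular.min.js"></script>
  <script src="https://code.angularjs.org/1.6.7/angular-route.min.js"></script>
  <script src="aroute.js"></script>
</head>
<body>
  <!-- Matched route templates are injected here -->
  <div ng-view></div>
</body>
</html>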
aroute.js
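Again a sketch; the #/about route returns the inline template shown in the output below:

var app = angular.module('myApp', ['ngRoute']);
app.config(function($routeProvider) {
  $routeProvider
    .when('/about', {
      template : '<p>Angular Page with route</p>'
    });
});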
Output:
In the output below, "Angular Page with route" is the response coming from the template and being injected into ng-view. It appears when the user accesses the URL [../angular-route.html#/about] as mentioned. The page is not reloaded; the output is injected inside ng-view, so the app behaves like a local application, loading content without a page refresh.
Inspect the element and enable Firebug to check the formatting and the way the data is rendered.
Similarly, you can add multiple paths under $routeProvider.
Sourcecode:
Here we have added one more route with a templateUrl option that will fetch the file from the provided location and render it in the view, as sketched below.
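A sketch of the additional route; the file name follows the listing below:

.when('/career', {
  templateUrl : 'Carrer.htm'
})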
Carrer.htm
<p>Angular Page with route with templateUrl</p>
Output:
So far we have used template and templateUrl. I believe by now you will be confident enough to use routing. Moving to the next level, we will add a controller to the route property so that the view responds accordingly, i.e. an Angular controller attached to the route. This helps in assigning an individual controller to a specific route.
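A minimal sketch of a route with its own controller, assuming the module object app from the earlier sketch; the controller name and message are illustrative:

.when('/career', {
  templateUrl : 'Career.htm',
  controller : 'careerController'
})

app.controller('careerController', function($scope) {
  // Business logic for this route; result is bound in Career.htm
  $scope.result = 'Welcome to the career page';
});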
In the above source code, we have added a controller under the career route to perform business logic and transfer the response to Career.htm.
Career.htm
<div>
{{result}}
</div>
This is how we render the data in the view. Here the data is retrieved from the scope and bound to the view. The best part is that the browser fetches this templateUrl only once; after that the request is served from the cache.
To see this, use the Network tab in Firebug and send the same request multiple times. Just to add, the external file is loaded only once. Below is a screenshot of the Network tab.
Routing provides a way to handle the default route (/), which is nothing but rendering a view/HTML when the default path is requested. It's simple.
.when('/', {
template : '<p>Angular Home Page with route</p>'
})
When you hit the base URL, the browser path resolves to (/); if you don't specify anything for the route location, it falls back to this default path.
AngularJS route output3
What if we don't have a valid URL; how can we handle that? Here .otherwise does the trick when a user tries to visit a page that is not available in the routing configuration. In such cases you can handle the exception by redirecting or by showing a meaningful message.
.otherwise ({
template: '<p>Choose item from link.</p>'
})
Another attribute under routing is redirectTo.
Quite often we come across a situation where we don't want to change a URL because it is user-friendly, and we want to maintain the URL pattern for future reference instead of changing it from the backend. Changing URLs can be really painful in a bigger application when you don't know where the change will have an impact.
The solution is redirectTo, which allows you to redirect users from an existing path.
.when('/location', {
redirectTo: '/career'
})
In the above source code, the /location path is redirected to the /career route on every request. We can also redirect based on a condition.
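A sketch of a conditional redirect; redirectTo can also be a function that returns the target path (the parameters follow the ngRoute docs, the condition is illustrative):

.when('/location', {
  redirectTo : function(routeParams, path, search) {
    // Redirect to /career only when the query string asks for it
    if (search.goto === 'career') {
      return '/career';
    }
    return '/';
  }
})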
Here we are redirecting based on a certain condition and otherwise returning to the default path.
I believe this part of the series is enough to get started with AngularJS routing. So far we have seen different aspects of routing: fetching data from a template, rendering a view from an external file, redirection with functional logic, default path handling, invalid path handling, and route case-sensitivity handling. You should now be able to create and use routes within your own custom AngularJS application.
Every developer knows how painful bugs can be, especially in the production stage, as fixing them takes hours of hard work. Though development teams always give their best to work out the bugs during development, a number of bugs still creep into the code. So what can be done to fix these bugs and eliminate the repetitive task of manual testing? One way is to go for unit testing, a well-known methodology for writing unit test cases, here in PHP.
PHPUnit is a programmer-oriented testing framework and an outstanding framework for writing unit tests for PHP web applications. With the help of PHPUnit, we can practice test-driven development.
Before diving into PHPUnit, let’s have a look at types of testing.
Types of Testing
Testing is about verifying a product to find out whether it meets specified requirements or not. Typically, there are four types of testing:
Unit Testing
Functional Testing
Integration Testing
Acceptance Testing
Unit Testing: Analysing a small piece of code is known as unit testing. Each unit test targets a unit of code in isolation. Unit tests should be as simple as possible and should not depend on other functions/classes.
Functional Testing: Testing based on functional requirements/specifications is called functional testing. Here we check that the given tests produce the output required by the end user.
Integration Testing: It is built on top of Unit Testing. In Integration testing, we combine two units together and check whether the combination works correctly or not. The purpose of this testing is to expose faults in the interaction between integrated units.
Acceptance Testing: This is the last phase of the testing process. Here we check the behavior of the whole application from the user's side. End users insert data and check whether the output meets the required specifications. They just check the flow, not the functionality.
One of the main benefits of writing unit tests is that they reduce bugs in new and existing features. Unit testing identifies defects before the code is sent for integration testing, and it improves design. By unit testing we can find bugs at an early stage, which eventually reduces the cost of bug fixing. It also allows developers to refactor code or upgrade systems with confidence. Further, it makes development faster and improves the quality of the code.
PHPUnit: Writing unit tests manually and running them often takes a lot of time, so we need an automation tool, much as Selenium automates browser testing. PHPUnit is currently the most popular PHP unit testing framework.
It provides various features like mocking objects, code coverage analysis, logging, etc. It belongs to the xUnit family of libraries. You can use these libraries to create automatically executable tests which verify your application's behavior.
Installing PHPUnit (Prerequisites)
Use the latest version of PHP.
PHPUnit requires the dom, json, pcre, reflection and spl extensions, which are enabled by default.
Installation (Command line interface)
Download the PHP Archive (PHAR) to obtain PHPUnit. To install the PHAR globally, we can use the following commands on the command line.
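The standard PHAR installation from the PHPUnit documentation looks like this:

wget https://phar.phpunit.de/phpunit.phar
chmod +x phpunit.phar
sudo mv phpunit.phar /usr/local/bin/phpunit
phpunit --version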
assertEquals($expected, $actual): It gives an error when $expected is not equal to $actual. If $expected equals $actual, the assertion passes.
Example:
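A minimal sketch matching the failure messages described below; class and method names are illustrative:

use PHPUnit\Framework\TestCase;

class AssertEqualsTest extends TestCase
{
    public function testIntegersAreEqual()
    {
        $this->assertEquals(1, 0);
    }

    public function testStringsAreEqual()
    {
        $this->assertEquals('bar', 'baz');
    }
}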
Output:
This fails because 1 is not equal to 0 and 'bar' is not equal to 'baz'.
Annotations
@dataProvider: A test method can accept arbitrary arguments, which are supplied by a data provider method: a public method that returns an array of arrays or objects. We specify the data provider with the @dataProvider annotation.
Example for data provider:
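A sketch built around the additionProvider method named below; the tested sum is illustrative:

use PHPUnit\Framework\TestCase;

class DataTest extends TestCase
{
    /**
     * @dataProvider additionProvider
     */
    public function testAdd($a, $b, $expected)
    {
        $this->assertEquals($expected, $a + $b);
    }

    public function additionProvider()
    {
        return [
            [0, 0, 0],
            [0, 1, 1],
            [1, 1, 2],
        ];
    }
}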
Output:
In the above code snippet, additionProvider is the data provider. We can use one provider in as many test methods as we want.
PHPUnit also supports explicit dependencies between test methods. Using the @depends annotation, a test method can consume the result of the test it depends on.
Example:
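A sketch built around the testEmpty() and testPush() methods described below, mirroring the classic stack example from the PHPUnit manual:

use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    public function testEmpty()
    {
        $stack = [];
        $this->assertEmpty($stack);
        return $stack;
    }

    /**
     * @depends testEmpty
     */
    public function testPush(array $stack)
    {
        array_push($stack, 'foo');
        $this->assertEquals('foo', $stack[count($stack) - 1]);
    }
}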
Output:
In the above example, we declare a value in testEmpty() and use it in the dependent method: testPush() depends on testEmpty(), so the outcome of testEmpty() can be used inside testPush().
Tests for a class go into a ClassTest class, which inherits from PHPUnit\Framework\TestCase. Test methods are public and every method name should start with test*. Inside these methods we use assertion methods. Note that annotations are used before the method.
setUp() and tearDown() methods:
We can share setup code across test methods. Before every test method runs, the setUp() template method is invoked; setUp() creates the objects we test against. After every test method has run, whether it failed or succeeded, the tearDown() template method is invoked; tearDown() cleans up the objects.
Example for setUp() and tearDown() methods:
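A sketch using the $name instance variable mentioned below; the value is illustrative:

use PHPUnit\Framework\TestCase;

class NameTest extends TestCase
{
    private $name;

    protected function setUp()
    {
        // Runs before every test method
        $this->name = 'PHPUnit';
    }

    public function testNameIsSet()
    {
        $this->assertEquals('PHPUnit', $this->name);
    }

    protected function tearDown()
    {
        // Runs after every test method, pass or fail
        $this->name = null;
    }
}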
In the above example, we declare an instance variable $name in setUp() and use it in the other test methods.
If the setUp() code differs only slightly between tests, put the differing code in the test method itself. If you need a completely different setup, create another test case class. At this point, we're ready to use PHPUnit to make writing unit tests easier and improve software quality.
Hope you find this "Introduction to Unit Testing" helpful. Unit testing is a vast topic; here I have given you a brief introduction so that you can start writing your own tests. Please comment below if you have any questions or suggestions.
Below is the presentation on "Getting Started With PHPUnit Testing".
Personalization isn't a new concept. The creator and project lead of Drupal, Dries Buytaert, himself believes that "personalization and contextualization are becoming critical building blocks in the future of the web." Let's elaborate on the concept of personalization first.
Personalization is the tailoring of web content to match user priorities. The concept revolves around finding a suitable method of delivering content to users based on their preferences as well as past behavior. So what steps can we follow to achieve effective content personalization? Let's dive in.
Understand the Queries:
Firstly, it's important to know what your customers expect from you. This may involve primary as well as secondary research, followed by in-depth analysis, to understand their queries. This will help in persuading users that the content you've provided is relevant and important. Undoubtedly, knowing these queries can play a significant role in shaping the content structure you plan for enhancing customer engagement on your website.
Know your Target Audience:
This is yet another step that helps you offer personalized content that best matches your customers' demands. A website can have thousands of anonymous visitors with different usage patterns depending on their behavior, context, history and filters. Therefore, identifying your target audience should be one of the fundamental steps towards a better content personalization goal.
Sort out your content types:
The next step is to sort out the content types according to the needs of individual users. Let's take an example.
Let's assume we have three users, say A, B and C. All these users have their own priorities and histories. Some of them may be accessing the content on mobile, others on the web. One may be interested in online shopping, another may like the daily updates on your site, and so on.
So, what do we do? All we need here is to classify the content according to consumer needs with the right personalization tools and align it to the portal accordingly.
Develop a Content Strategy:
Now that you have a fair idea about your target audience and their personas, devise a strategy to map the content of your portal. This must be based on the defined customer personas, content categorization and the user experience on your website. The content should be engaging enough to hook customers; irrelevant or boring content can stifle their interest, resulting in a higher bounce rate.
Analyze the Market and Competition:
In the growing competition, it's important to closely watch your competitors and monitor their activities regularly. Conducting a regular analysis helps you find out what your competitors are doing to enhance brand awareness and generate new leads, and these findings should feed into your strategy too. Here, detailed market research can be used to develop and adopt more powerful optimization tools. Eventually, adopting these robust tools will help you enhance customer engagement and stay ahead.
Optimizing User Experience with Content Personalization
As discussed, there are various steps to be followed for effective content personalization. The scope has also become wider, with a host of options available in the market today based on various recommendations. Let's have a look.
Web Analytics Integrated Personalization:
Adobe Target: Adobe Target enables you to deliver personalized content based on real-time data. It automates the targeting process in order to reduce workload while enhancing conversions.
Google Optimize: Google Optimize allows multivariate testing of your website to deliver a personalized experience to all customers and businesses. It integrates seamlessly with Google Analytics and BigQuery and offers visual editing, experiment management, etc.
SaaS Tools for E-Commerce Personalization:
Bunting Personalization: Bunting helps in setting up personalized content that targets the right visitors at the right time in their journey across channels and various touchpoints.
Apptus: Apptus combines big data and machine learning to continuously improve your e-commerce site's exposure strategies and sales performance while minimizing costs to drive organizational efficiency.
Personalization as a part of Web Development Platforms:
Magento: An open-source e-commerce platform that comes with various extensions for personalization, such as UNBXD and Commerce Stack.
Episerver: Episerver's intelligent personalization feature adapts to changing visitor patterns and campaigns to provide automatic recommendations.
Acquia Lift: Lift merges anonymous and known visitor profiles and adaptively segments your content in real time. It can be implemented using Drupal's Acquia Lift Connector module.
Tools for Marketing Automation and Personalization:
Evergage: Evergage tracks all real-time interactions with your webpage to deliver a maximally relevant, individualized experience. It comes with features such as A/B testing, cloud-based optimization, etc.
Sitespect Personalization: Sitespect helps companies deliver enhanced, personalized and engaging experiences to site visitors.
Lytics Personalization: Using its unified customer profiles, Lytics helps you personalize all your data. It targets visitors by combining intent data with your site's behavioral and demographic data for a better experience.
Blueconic: Blueconic is a Customer Data Platform (CDP) that helps translate customer insights into personalized communication by creating a dynamic profile of each customer. It continuously enriches customer profiles and enables easy delivery of cross-channel personalization to drive the right customer interaction.
Enterprise level Business Personalization tools:
Oracle Eloqua Personalization: Oracle Eloqua equips marketers with lead and campaign management tools that help engage the right audience at the right time in the buyer's journey while providing real-time insights.
Monetate: Monetate includes various options for personalization such as A/B testing, multivariate testing, targeting and segmentation, individualized real-time personalization, etc.
To sum up, digital innovation is gathering pace in this rapidly progressing market, which makes it obligatory for firms to strategize their game plan accordingly. Using the right tools and steps to personalize your content helps enhance the user experience globally. It also plays a vital role for businesses in expanding their B2B relations and even invoking user trust.
Working on a new project? Get in touch with our Drupal experts today for a hassle-free web development.
So far we have gone through different components of AngularJS, such as data binding, modules and controllers, scope, custom services, and filters. In this blog we will discuss custom directives, to be followed by dependency injection, functions and routing.
Before we proceed to Custom directives, please go through the previous blogs (mentioned above) to have a better understanding of Angular and its components. Not to mention this part of the series requires advanced-level knowledge of AngularJS.
In AngularJS, directives allow you to extend HTML: they let you create custom tags. By adding existing or custom directives, you can attach functionality to your application's markup. Technically, AngularJS tells the browser, at compile time, to attach behavior to an element or to transform the DOM element. We can also manipulate the DOM with jQuery, but creating a custom directive lets you reuse the element across your AngularJS application as required.
Note that just as AngularJS lets you create controllers and services for an application, it also lets you create directives.
Most of you have seen the basic directives used in everyday application development: ng-app, ng-init, ng-model, ng-repeat, and ng-class.
ng-model binds the value of an HTML control to application data. ng-repeat repeats an HTML element for each item in a collection. ng-init initializes application data. ng-class dynamically binds CSS classes to an HTML element, and its value can be a string, an object, or an array. ng-app bootstraps the application.
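A minimal sketch showing these built-in directives together (the names array and the highlight class are illustrative):

<!-- ng-app bootstraps the app; ng-init seeds demo data -->
<div ng-app="" ng-init="names=['Alice','Bob','Carol']">
    <input type="text" ng-model="greeting" placeholder="Type a greeting">
    <!-- ng-class applies the class only while greeting is non-empty -->
    <p ng-class="{ highlight: greeting }">{{ greeting }}</p>
    <ul>
        <!-- ng-repeat stamps out one <li> per name -->
        <li ng-repeat="name in names">{{ name }}</li>
    </ul>
</div>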
In AngularJS, directive names in markup start with ng- or data-ng-, and there are two ways to write them. The basic way uses the ng- prefix; alternatively, you can use the HTML5-valid data-ng- prefix, as shown below.
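For example, with ng-init (the counter value is illustrative):

<!-- basic form -->
<div ng-init="count=1">{{ count }}</div>
<!-- HTML5-valid form -->
<div data-ng-init="count=1">{{ count }}</div>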
The data-ng- form mirrors HTML5 custom data attributes (data-*), which are meant for storing custom data on a page or in an application, so the markup remains valid; you can verify this with an HTML validator.
Custom Directives:
We can use a custom directive in the view only after registering it on the module, as in the samples that follow.
AngularJS defines a naming convention for custom directives so that attribute names in the markup map to directive names in JavaScript. There are a few guidelines and suggestions from AngularJS for declaring directives and invoking them in your application.
Write the directive name in the markup in lower case.
Suppose your new directive is registered as ‘myNewDirective’. To invoke it in the view, write my-new-directive, my_new_directive, or my:new:directive; AngularJS normalizes the colon (:), underscore (_), and hyphen (-) delimiters.
The template property of a directive lets you add HTML content to the view.
Sample code:
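A minimal sketch of a directive using the template property (the module and directive names are illustrative):

var app = angular.module("myApp", []);
app.directive("welcomeNote", function() {
    return {
        // the template HTML is rendered wherever the directive is used
        template: "<h4>Welcome to custom directives!</h4>"
    };
});

// In the view: <welcome-note></welcome-note> or <div welcome-note></div>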
If you want to make the HTML content more dynamic based on some business logic, use the link function instead. Commonly used for DOM manipulation, the link function accepts three parameters: scope, element, and attributes.
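A sketch of a link function that manipulates the DOM (the directive name and the hover behavior are illustrative):

app.directive("highlightOnHover", function() {
    return {
        restrict: "A",
        // scope, element (a jqLite wrapper) and attrs are supplied by Angular
        link: function(scope, element, attrs) {
            element.on("mouseenter", function() {
                element.css("background-color", attrs.highlightOnHover || "yellow");
            });
            element.on("mouseleave", function() {
                element.css("background-color", "");
            });
        }
    };
});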
Custom directives also have a scope property. It defines the boundary between the directive and the controller's scope: you can isolate the directive from the parent controller scope and bind to selected properties in different ways ('@' for strings, '=' for two-way binding, '&' for expressions). We can also define a controller inside the directive and manipulate the scope within that controller function itself. Both are shown in the sketch below.
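A sketch combining an isolate scope with an inline controller (the directive, property, and callback names are illustrative):

app.directive("userCard", function() {
    return {
        restrict: "E",
        scope: {
            title: "@",     // one-way string binding from the attribute
            user: "=",      // two-way binding to a parent scope object
            onSelect: "&"   // expression evaluated on the parent scope
        },
        template: "<div><h4>{{title}}</h4><p>{{user.name}}</p>" +
                  "<button ng-click='select()'>Select</button></div>",
        controller: function($scope) {
            // the inline controller works directly on the isolate scope
            $scope.select = function() {
                $scope.onSelect({ user: $scope.user });
            };
        }
    };
});

// In the view: <user-card title="Profile" user="currentUser" on-select="pick(user)"></user-card>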
Custom directives also have a property called replace, which controls whether the template replaces the directive element itself. It is disabled by default, so the template is inserted as a child of the directive element. To enable it, add replace: true as one of the directive parameters.
With replace: true, the template's root element replaces the custom tag in the rendered output; with replace: false (the default), the template is rendered inside the custom tag.
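A minimal sketch, assuming a directive named myDirective with a one-element template:

app.directive("myDirective", function() {
    return {
        restrict: "E",
        replace: true,  // the template's root <div> replaces <my-directive>
        template: "<div class='note'>Hello</div>"
    };
});

// With replace: true the rendered DOM is:
//   <div class="note">Hello</div>
// With replace: false (the default) it is:
//   <my-directive><div class="note">Hello</div></my-directive>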
Custom directives use a property called restrict that defines which kinds of HTML constructs can trigger the directive.
E: The application looks for a matching HTML element and activates the directive as a custom tag.
A: The application looks for a matching HTML attribute and activates the directive through that attribute.
C: The application looks for a matching CSS class and activates the directive when it finds one.
M: The application looks for a matching HTML comment. This option is used very rarely, mainly when markup validation constraints rule out the other forms.
All four invocation styles are shown in the sample code below.
We can also combine these restrict options into a single value. For example, restrict: 'AEC', 'ACE', and 'CEA' all behave identically; the order does not matter. By default, restrict is 'AE'.
Sample code:
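A sketch of a directive enabling all four styles (the directive name is illustrative; note that the comment form needs replace: true to render anything):

app.directive("sampleNote", function() {
    return {
        restrict: "AECM",
        replace: true,
        template: "<p>Rendered by sampleNote</p>"
    };
});

// Usage in the view:
//   E: <sample-note></sample-note>
//   A: <div sample-note></div>
//   C: <div class="sample-note"></div>
//   M: <!-- directive: sample-note -->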
In the custom directive declaration, we have one more keyword, 'template', which specifies the HTML content to be added to the view.
Sample:
return {
    restrict: 'E',
    template: '<div>{{result}}</div>'
};
So far we have seen the definition and usage guidelines. Let's now see how to create a custom directive. Remember that creating a custom directive is quite similar to creating a factory service, and the directive's job is to return HTML.
The directive can then be rendered in the view, as the source sketch below shows.
Any directive name written in camelCase is referenced in the view by replacing each uppercase letter with a delimiter plus the lowercase letter.
E.g. ‘newCustomerRequest’ becomes ‘new-customer-request’, or equivalently ‘new:customer:request’.
This source code creates the custom directive and injects its HTML directly into the view.
Source code:
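Since the original listings are not reproduced here, the following is a minimal sketch of what the pair looks like (the module and controller names are assumptions; the directive name and welcome text come from this post):

Directive.html:

<!DOCTYPE html>
<html>
<head>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
    <script src="Directive.js"></script>
</head>
<body ng-app="myApp" ng-controller="myCtrl">
    <new-customer-request></new-customer-request>
</body>
</html>

Directive.js:

var app = angular.module("myApp", []);
app.controller("myCtrl", function($scope) {
    $scope.result = "Welcome to Custom AngularJS Directives";
});
app.directive("newCustomerRequest", function() {
    return {
        restrict: "E",
        template: "<h2>{{result}}</h2>"
    };
});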
The source code above renders data through the custom directive. Built-in directives such as ng-app and ng-controller take care of bootstrapping the app and compiling your markup against the controller whenever AngularJS finds ng- directives in your HTML code.
Just to confirm how the data is displayed, I used Firebug to inspect the detailed HTML structure. There you can see that the data printed inside the custom directive element is: Welcome to Custom AngularJS Directives.
By now you must have a pretty good idea of how to create an Angular module for your application, and enough knowledge of Angular controllers, how to create them and use them across the application.
Creating a custom directive follows the same pattern as creating a factory service: in a factory service, the function returns an object, and in a custom directive we use the same formula, returning a directive definition object.
The code above returns a template containing {{result}}. Here, result is a property on the controller's scope, holding the static value "Welcome to Custom AngularJS Directives".
It’s fine to write small HTML snippets inside an AngularJS application as custom directive templates. However, we shouldn’t do the same when a large HTML codebase has to be pushed into directives, as we need to keep HTML and JS separate in an Angular application; embedding big markup strings in JavaScript is not recommended. The next example shows how to achieve that separation.
To handle that scenario and avoid writing nasty, huge HTML strings inside the directive's template, we can use ng-template. This built-in core directive loads the content of a script tag into $templateCache.
Follow the steps below to make this work:
1. Define your markup inside an ng-template script block, keyed by an id:
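A minimal sketch, reusing the heading text from the earlier example:

<script type="text/ng-template" id="my-custom-dir.htm">
    <h2>Welcome to Custom AngularJS Directives</h2>
</script>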
2. In the directive definition, use templateUrl instead of template to fetch the content.
Here "my-custom-dir.htm" is the id of the script tag; the same name is referenced in directives.html.
return {
    templateUrl: "my-custom-dir.htm"
};
Output: the heading "Welcome to Custom AngularJS Directives" renders inside the custom directive element, exactly as before.
Still, these templates are not modular enough to be reused by other AngularJS applications. Here is how to fix that.
1. Create a separate file.
2. Move your HTML code into the new file and save it as "my-custom-dir.htm", the same name we used earlier; templateUrl will now load the markup from that file.
GraphQL is the new frontier in Application Programming Interfaces (APIs): a query language for your API and a set of server-side runtimes (implemented in various backend languages) for executing queries. Further, it isn't tied to any specific database or storage engine; instead, it is backed by your existing code and data.
If you are a JavaScript developer, chances are you have heard of it but are not sure what it is. To help you out, I have written this blog post so that you can easily figure out what exactly GraphQL is and how to make the most of it. When you complete this GraphQL blog-cum-tutorial, you will be able to answer:
What is GraphQL
Core ideas of GraphQL & limitations of RESTful APIs
How GraphQL resolves the limitations of RESTful APIs
How GraphQL can be used in Drupal
Let’s get started.
So what is GraphQL?
As I mentioned earlier, GraphQL is a query language for fetching application data in a uniform way. Developed by Facebook in 2012, GraphQL was publicly released in 2015; for the first few years, the social media giant used it purely internally.
Cutting straight to the chase: GraphQL is a methodology that directly competes with REST (Representational State Transfer) APIs, much like REST competed with SOAP at first.
Core Ideas of GraphQL
Client Requests and Server payloads have the same structure.
The server contains the schema.
The client dictates what it wants the server to provide.
Limitations of RESTful APIs
Multiple Endpoints -- Endpoints are specific to individual views. With a REST approach, you create multiple endpoints and use HTTP verbs to distinguish read actions (GET) from write actions (POST, PUT, DELETE).
Overfetching -- The response contains more data than needed, much of it unused. For instance, if you hit the URL https://swapi.co/api/people/1/, the response includes a large amount of data such as eye color, gender, links to films, etc.
Many Round Trips -- In the previous URL's response, you can see that films contains a list of URLs. To get the details of those films, you need to hit each of these URLs, resulting in multiple round trips to the server.
How GraphQL Resolves The Limitations of RESTful APIs
Single Endpoint -- A single endpoint resolves every GraphQL query and sends back a single, unified response. GraphQL does not use HTTP verbs to determine the request type.
Tailored Response -- The response is catered to the client's demand. With GraphQL you explicitly request just the information you need; rather than opting out of a full default response, you must pick the fields you want. This saves resources on the server, since the payload to transfer is smaller.
Fewer Round Trips -- A single response is flexible enough to accommodate many relationships.
GraphQL in Drupal:
Drupal provides a module named graphql that lets you craft and expose a GraphQL schema for Drupal 8. It is built around a PHP port of GraphQL and supports the full official GraphQL specification with all its features.
You can use this module as a foundation for building your own schema through custom code, or you can use and extend the generated schema via its plugin architecture. The provided plugin implementations are packaged as sub-modules.
There are some other modules based on it like:
GraphQL Mutation: core module with common fields and types for enabling mutations.
GraphQL JSON: extracts data from various JSON sources.
The mutation and JSON modules are still in dev versions, and the companion views integration has an alpha release. You can try these out to learn more.
Drupal also provides an in-browser IDE to execute GraphQL queries; you can find it at graphql/explorer.
Let’s try our hands at some GraphQL queries and mutations.
Fields: Write a simple nodeQuery that returns entityLabel and entityId.
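A sketch of such a query (the field names follow the Drupal graphql module's generated schema):

{
  nodeQuery {
    entities {
      entityId
      entityLabel
    }
  }
}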
The response of the above query will be similar to this.
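The node ids and titles here are hypothetical:

{
  "data": {
    "nodeQuery": {
      "entities": [
        { "entityId": "1", "entityLabel": "First article" },
        { "entityId": "2", "entityLabel": "Second article" }
      ]
    }
  }
}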
As you can see, the response mirrors the structure of the query. This is essential to GraphQL: you always get back what you expect, and the server knows exactly which fields the client is asking for.
Query with arguments
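A sketch using the module's nodeById field (the node id value is hypothetical):

{
  nodeById(id: "1") {
    entityLabel
  }
}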
In the above query, we fetch the title of a node by passing an argument, the node id. The response from the query will be similar to this:
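Again with a hypothetical title:

{
  "data": {
    "nodeById": {
      "entityLabel": "First article"
    }
  }
}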
Query with Filters
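A sketch filtering on the node's status field (the filter/conditions syntax follows the module's schema; the value is hypothetical):

{
  nodeQuery(filter: { conditions: [{ field: "status", value: ["1"] }] }) {
    entities {
      entityLabel
    }
  }
}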
The above query filters nodes by published status, resulting in a response like the one below.
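With hypothetical titles again:

{
  "data": {
    "nodeQuery": {
      "entities": [
        { "entityLabel": "First article" },
        { "entityLabel": "Second article" }
      ]
    }
  }
}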
Aliases:
Aliases let you rename the result of a field to anything you want. You can't directly query for the same field twice with different arguments; if you write a query like the following, you will get an error:
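A sketch of the conflicting query (ids hypothetical):

{
  nodeById(id: "1") {
    entityLabel
  }
  nodeById(id: "2") {
    entityLabel
  }
}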
The error reads: "Fields \"nodeById\" conflict because they have differing arguments." This is where aliases come to the rescue: we can give each of the two queries an alias, as shown below.
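With illustrative alias names:

{
  firstNode: nodeById(id: "1") {
    entityLabel
  }
  secondNode: nodeById(id: "2") {
    entityLabel
  }
}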
Fragments: GraphQL includes reusable units called fragments. Fragments let you construct sets of fields and then include them in whichever queries need them. In the example below, I have created a fragment named nodeFields.
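A sketch of the fragment in use (the fragment is declared on the module's Node type; ids are hypothetical):

{
  firstNode: nodeById(id: "1") {
    ...nodeFields
  }
  secondNode: nodeById(id: "2") {
    ...nodeFields
  }
}

fragment nodeFields on Node {
  entityId
  entityLabel
}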
The above query generates a response like this:
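With hypothetical values:

{
  "data": {
    "firstNode": { "entityId": "1", "entityLabel": "First article" },
    "secondNode": { "entityId": "2", "entityLabel": "Second article" }
  }
}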
Variables:
Sometimes we need to pass dynamic values to a query. We can do this using variables in GraphQL.
When we start working with variables, we need to do three things:
> Replace the static value in the query with $variableName
> Declare $variableName as one of the variables accepted by the query
> Pass variableName: value in the separate, transport-specific (usually JSON) variables dictionary
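A sketch applying those three steps to the earlier nodeById query (the operation and variable names are illustrative; the argument type follows the module's schema):

query getNode($nodeId: String!) {
  nodeById(id: $nodeId) {
    entityLabel
  }
}

# variables, sent as a separate JSON dictionary:
{
  "nodeId": "1"
}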
Mutation
Most discussions of GraphQL focus on data fetching, but any complete data platform also needs a way to modify server-side data. In Drupal, the GraphQL Mutation module (https://www.drupal.org/project/graphql_mutation) is needed to perform write operations; it adds GraphQL mutations for all content entities.
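A sketch of what a mutation looks like; the mutation and input field names here are purely hypothetical, since the actual fields depend on the schema generated for your site's entities:

mutation {
  # hypothetical mutation name; check your generated schema
  createNode(input: { type: "article", title: "Hello GraphQL" }) {
    entity {
      entityId
    }
  }
}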