Flutter - A fast way to develop iOS and Android apps from a single codebase

Flutter is an open-source application SDK that lets you build cross-platform (iOS and Android) apps with one programming language and one codebase. Being an SDK, Flutter also provides the tools to compile your code to native machine code.

It is also a framework and widget library that gives you reusable UI building blocks (widgets), utility functions, and packages.

Flutter uses the Dart programming language (developed by Google), which is focused on front-end user interface development. Dart is an object-oriented, strongly typed language whose syntax feels like a mixture of Java, JavaScript, Swift, and C/C++/C#.

Why do we need Flutter?

You only have to learn and work with one programming language, Dart, and you maintain a single codebase for both the iOS and Android applications. Since you don't have to build the same interface twice, it saves you time.

  • Flutter gives your apps a native look and feel.
  • It also lets you develop games and add animations and 2D effects.
  • Development is fast because it supports hot reload.

Development Platforms:

To develop a Flutter application you need the Flutter SDK, just as you need the Android SDK to develop an Android application.

The IDEs and tools you will need to develop a Flutter application are:

Android Studio: needed for the Android SDK and to run the Android emulator.

VS Code: an editor you can use to write Dart code. (This is optional, since you can also write Dart code in Android Studio or Xcode.)

Xcode: needed to run the iOS simulator.

Steps to install Flutter on Linux:

To install Flutter on your system, follow the official documentation.

Here are the steps to create and run your first Hello World Android app with Flutter:

After you have finished the Flutter installation from the official docs, open your terminal and run

flutter doctor

You should see something like this:

(Screenshot: flutter doctor output)

To create a Flutter application, run the command below in your preferred directory (use only Latin letters, digits, and underscores in the app name, otherwise you may run into errors):

flutter create hello_world_app

You should now see the app's folder structure, like this:

(Screenshot: project folder structure)

Your application code lives in hello_world_app/lib/main.dart.

Note that you will write most, if not all, of your code in the lib directory.

Now replace the contents of main.dart with the code below.


import 'package:flutter/material.dart';

void main() =>
    runApp(MyApp()); // main function is the entry point of the application

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: Text('HELLO WORLD'),
        ),
        body: Material(
          child: Center(
            child: Text('HELLO WORLD!'),
          ),
        ),
      ),
    );
  }
}

In Flutter, almost everything is a widget. Flutter provides a widget for nearly everything: buttons, input fields, tables, dialogs, tab bars, card views, and the list goes on.

In the first line we import the material.dart library, a rich set of widgets that implement Material Design.


void main() => runApp(MyApp());

The main function is the entry point of the application. It calls runApp, which takes the MyApp widget as a parameter.

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {...}
}

This is the widget you will use to build your app; a widget can be either stateful or stateless.

A stateful widget has mutable state, and this kind of widget must implement the createState() method.

A stateless widget has no internal state (for example, an image or a piece of text); it must implement the build() method.

Our app does not need a stateful widget, as we do not have to change any state.

The internals break down like this:

  • MaterialApp() ⇒ a wrapper for Material Design widgets
  • Material() ⇒ creates a piece of material
  • Scaffold() ⇒ creates a visual scaffold for Material Design widgets
  • AppBar() ⇒ creates a Material Design app bar
  • Center() ⇒ centers its child widget
  • Text() ⇒ displays a piece of text


To run this Flutter application:

You will need an Android emulator, an iOS simulator, or a connected physical device.

You can use the commands below to run the app:

flutter run ==> it will run the app on the connected device or emulator
flutter run -d DEVICE-ID ==> will run on a specific device or emulator
flutter run -d all ==>  will run on all connected devices 

After this, you should see a screen like the one below.

(Screenshot: the Hello World app running)

Voila, we have just built our first application using Flutter. This should be a good starting point for developing database-driven applications. I built a new application, treLo, a roadside assistance platform, using Flutter, and we released it within a week. I would love to hear your feedback and about the ideas you are working on with Flutter.

Drupal Contribution Hour at Valuebound: 2019

Drupal is gaining adoption in enterprises, and Drupal 9, due for release in June 2020, should speed up that adoption even further. Drupal 8.6, released last September, included major improvements in the layout system, new media features, and better migration support.

The biggest hindrance to the growth of Drupal is the availability of quality developers. With more and more enterprise companies adopting Drupal for various business applications, including CMS, intranet, extranet, and commerce, there is an increased need for experienced Drupal site builders, developers, and themers. To continue the growth momentum, as a community we need to work towards building a bigger talent pool of good Drupal developers and introduce Drupal to more people early in their careers. Every coder becomes a master in their field through experience: developing technical skills, solving difficult problems, and being eager to learn and to teach others.

Drupal also gives every developer a unique opportunity to build a personal brand, and to do so through what they are most comfortable with: coding.

For developers, being an open-source contributor is a key selling point for your professional skills. Drupal is one of the most popular open-source CMSs and a growing platform, with 1.37 million members of whom about 114,000 contribute actively.

We at Valuebound have taken to heart the advice of Kristen Senz, researcher at Harvard Business School: “Companies that contribute and give back learn how to better use the open-source software in their own environment.” We organized Drupal Contribution Hours on 25th May, 22nd June, and 18th July with a host of aspiring Drupal developers as well as mentors, where we introduced how everyone can contribute. We helped participants select issues, set up Git, create patches, and commit code.

Bravo! We touched 40+ issues in the last few months, and the team submitted patches to the issues listed below. A few of them have already been accepted by the maintainers.

https://www.drupal.org/project/field_permissions/issues/3042752
https://www.drupal.org/project/header_and_footer_scripts/issues/3050967
https://www.drupal.org/project/login_redirect_to_front/issues/3065756
https://www.drupal.org/project/address/issues/2995992
https://www.drupal.org/project/perfmon/issues/3065862
https://www.drupal.org/project/shorten/issues/3065879
https://www.drupal.org/project/roleassign/issues/3065871
https://www.drupal.org/project/node_title_validation/issues/3065839
https://www.drupal.org/project/entity_clone/issues/3068549
https://www.drupal.org/project/perfmon/issues/3068669
https://www.drupal.org/project/site_settings/issues/3067951
https://www.drupal.org/project/phone_registration/issues/3071779
https://www.drupal.org/project/entity_reference_layout/issues/3071702
https://www.drupal.org/project/paragraphs/issues/2901390


It was not just a four-hour game; many participants kept working on their issues afterwards and continue to bring us their questions. If you are a Drupal developer reading this, drop a comment to get access to the issues list.

Oh, I forgot to mention: I am new to Drupal and have started to feel the love of the Drupal community, experiencing the true joy of making my small contribution to this world by organizing this event. We ended the day resolved to continue the tradition every third Thursday of the month. Until then, the team continues working on the patches they have picked up that need more time.

Build your CI/CD pipeline with AWS Elastic Beanstalk, CodePipeline and CloudFormation

Building an immutable infrastructure is the ultimate goal of this solution; reusable code that can spin up a similar environment in a short time, in a developer-friendly way, is the other. AWS CloudFormation is the orchestrator that provisions and maintains the infrastructure as code. The entire infrastructure can be created by executing a single template, which creates a nested stack with all dependent resources. The life cycle of each component of the stack is managed by updating the parent stack; CloudFormation detects the changes across all nested templates and executes the change sets.

CloudFormation, VPC, EC2, ELB, S3, Auto Scaling, AWS Elastic Beanstalk, CodeCommit, AWS CodePipeline, SNS, and IAM are used to implement this solution. AWS CloudFormation is the core component of the infrastructure and maintains the state of all the others. Our network infrastructure leverages a VPC and its components to build a secure network on top of AWS. A single VPC spans all Availability Zones of a region, with different subnets so that servers are distributed across Availability Zones for a highly available and fault-tolerant infrastructure. Each tier of the web application gets its own subnets.

(Architecture diagram)

Our application is designed as a two-tier architecture. The application logic runs on EC2 instances managed by AWS Elastic Beanstalk, and the data tier is implemented in RDS; both tiers are scalable. For infrastructure administration and maintenance, a bastion host is deployed in a public subnet. It is hardened, created from a prebuilt AMI provided by AWS, and allows SSH connections only from trusted IP sources. The application and database servers are hosted in private subnets and can be accessed only from the bastion host. Servers accept key-pair authentication only, to avoid vulnerabilities, and the app servers reach the internet through a NAT gateway for software installation.

A Classic Elastic Load Balancer is the user-facing component that accepts web requests for our application and routes the traffic to the backend EC2 instances. The backend servers process each request and return the response to the ELB, which then delivers it to the end user. The ELB is deployed in a public subnet and secured by a VPC security group that allows only HTTP/HTTPS inbound traffic from external sources, and it talks to the backend servers over HTTP/HTTPS only. To ensure high availability and uniform distribution of traffic, we have enabled cross-zone load balancing. We have also configured the load balancer for session persistence and set the idle timeout between the load balancer and the client.

We use an RDS Aurora database as the data tier for the application. It is deployed as a cluster with reader and writer endpoints. Both the servers and the database instances are protected by strict security group policies to prevent access from untrusted network sources.

AWS CodeCommit is the source code repository for this project; it is a highly available, private repository managed by AWS. An S3 bucket is used for storing the build artifacts, which AWS CodePipeline uses to deploy to the different environments.

The CI/CD pipeline is the core component of this project: it builds the code and deploys the changes to the servers. We use AWS CodePipeline to build it.

How to create the infrastructure?

Our infrastructure is created and managed by AWS CloudFormation. Before executing the template, please go through the instructions below.

Pre-requisites:

  1. A CodeCommit repository containing the application's source code
  2. An SNS topic with email subscribers
  3. An S3 bucket containing the AWS CloudFormation templates. Create a folder called "templates" inside the bucket and upload the CloudFormation templates into that folder.

Steps:

  1. Log in to the AWS Management Console and select CloudFormation in the Services menu.
  2. Click Create Stack; this is the only option shown if you have no stacks running yet. (A scripted alternative to this step is sketched just after this list.)
  3. Enter the required input parameters and execute the stack. The order of execution is as follows. CloudFormation parses the inputs and the resource section of the parent stack. It first creates the network stack for the infrastructure, which includes the VPC, subnets, route tables, NACLs, internet gateway, NAT gateway, and routing policies. A bastion host is created with an appropriate security group policy. An Elastic Beanstalk application is created for deploying the different environments, such as dev, staging, and production. An Aurora database cluster is created next for the dev, staging, and production environments; the DB servers have their own security group to control inbound access.

    The cluster also has its own parameter group and configuration group. Elastic Beanstalk application environments are then created for each environment. Our runtime here is PHP, and we created a configuration group with the required parameters such as the load balancer configuration, EC2 auto-scaling configuration, and environment variables for the application environments. The continuous integration and delivery pipeline is created in the last step. It uses CodeCommit as the source and applies changes to the Elastic Beanstalk environments whenever the source code changes, with manual approval for staging and production. Our template also creates the IAM role required by the CodePipeline project.
  4. After a few minutes the stack is available and you can access the services. Initially, CodePipeline releases the changes to the instances hosted in the Elastic Beanstalk environment.
  5. Access the environment UI and check the application.
  6. Make a change in the source code. A CI job is triggered within a minute: it pulls the source from the CodeCommit repo and, for the staging and production environments, waits for manual approval before applying the changes to the servers. Elastic Beanstalk creates new resources, deploys the code to the environment, and removes the old resources after a successful deployment. This happens every time a new version is committed to the repo.
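
Purely as an illustration (not part of the original setup), here is a minimal sketch of scripting step 2 with the AWS SDK for JavaScript (v2). The stack name, template URL, and parameter names below are placeholders; they must match whatever your own templates and S3 bucket expect.

// Hedged sketch: launch the parent CloudFormation stack programmatically.
// Values below are illustrative placeholders.
const AWS = require('aws-sdk');

const cfn = new AWS.CloudFormation({ region: 'us-east-1' });
const stackName = 'my-app-parent-stack';

cfn.createStack({
  StackName: stackName,
  TemplateURL: 'https://my-bucket.s3.amazonaws.com/templates/parent.yaml',
  Parameters: [
    { ParameterKey: 'Environment', ParameterValue: 'dev' },
  ],
  // Required when the nested templates create IAM roles (e.g. for CodePipeline).
  Capabilities: ['CAPABILITY_NAMED_IAM'],
}).promise()
  .then(() => cfn.waitFor('stackCreateComplete', { StackName: stackName }).promise())
  .then(() => console.log('Parent stack and nested stacks created'))
  .catch((err) => console.error('Stack creation failed:', err));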

CI/CD pipeline for deploying a PHP application hosted in an Elastic Beanstalk environment:

Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of CI are to find and address bugs more quickly, improve software quality, and reduce the time it takes to validate and release new software updates. In our case, we have built a CI pipeline using AWS CodeCommit and CodePipeline. It has three stages.

Stage1: Source

When the pipeline is triggered by a change in the configured repo branch, the very first step is to download the source code from the server into the workspace where the next set of actions will run. Here, we have configured the pipeline to pull the specified repository and branch. If a single stage has multiple actions, we can set a run order to execute particular actions in sequence.

Stage2: Approve

Some projects might not require a build, so we can move straight to the next stage; in our case that is the approval stage. The project manager can approve the changes to be deployed to the environment or reject them. We use SNS to send a notification asking the subscribers to approve the changes. If the action is approved, the pipeline moves to the next stage; otherwise it is aborted.

 

 

Stage3: Deploy

Depending on the approval, the pipeline may or may not reach the deploy stage. During this stage, the code is deployed to all the application environments. Elastic Beanstalk's deployment strategy strongly favours the blue-green deployment pattern: during a deployment, users keep accessing the older version of the application and no changes are made to the existing servers. Beanstalk creates a new set of resources and applies the changes there. After a successful deployment, users are served the latest version of the application and the old servers are removed.

The basic challenges of implementing CI include more frequent commits to the common codebase, maintaining a single source code repository, automating builds, and automating testing. Additional challenges include testing in similar environments to production, providing visibility of the process to the team, and allowing developers to easily obtain any version of the application.

Continuous delivery (CD) is a software development practice where code changes are automatically built, tested, and prepared for production release. Continuous delivery can be fully automated with a workflow process or partially automated with manual steps at critical points.

With continuous deployment, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated.

The source code is available here: https://github.com/rkkrishnaa/cicd-elasticbeanstack

Drupal and Artificial Intelligence for Personalization

We humans, as a species, want to create a future where AI acts in every aspect of our lives. AI is becoming more capable, with cognitive abilities closer to those of humans being enhanced on a daily basis, and it is solving many challenges that were not possible to address earlier.

When it comes to enterprise open-source CMSs, “Drupal” is the first name that comes to anyone's mind. Drupal is making inroads into enterprise CMS at a faster pace than anticipated, and Drupal 9 will be released in June 2020. In parallel, Artificial Intelligence is one of the technologies creating waves everywhere. Combining AI and CMS technologies is an area with a lot of potential and a great way to deliver the benefits of AI to a larger set of people. Drupal + AI can bring a lot of value to any organization,

  • Be it in the form of “Deriving Insights”
  • Web personalization
  • Or a combination of the above two.

On Sunday, May 19th, 2019, in association with Valuebound, we hosted a webinar in the series “Drupal and AI for Personalization”. The event was planned and delivered with the growing popularity of Drupal and AI in mind. Many coders and contributors know CMSs well but hit a hurdle when Artificial Intelligence is introduced to them, and there is also a set of folks who are completely new to both AI and CMS (Drupal). The webinar was planned and presented to address all of these groups.

During the first session, Gokul from Valuebound and I tried to capture the essence of and the need for Drupal and AI. We then spoke about a few essential steps to kickstart a Drupal + AI journey.

To help the community keep contributing where Drupal and AI meet, the presentation started with an introduction to the history of AI, followed by definitions and various insights. I also introduced Machine Learning and Deep Learning, covering everything from the basics of neural networks to deep neural networks with explanations and examples.

Note: the presentation is available at https://www.slideshare.net/valuebound/drupal-and-artificial-intelligenc…

Drupal provides a variety of features and has an edge over other CMS solutions on the market when it comes to digital experience. The following key points were discussed in detail:

  • Digital Experience
  • Global community and collaboration
  • Documentation and Web 
  • Innovation and Globalism

Making the bold statement that "AI can emulate human performance by learning from it", I started a discussion about Drupal and why now is the time to combine and explore AI/ML and Drupal together. During the session, the following points were addressed to meet the expectations of an audience from all walks of life:

  • Drupal's chat-bot API
  • Web Personalization and recommendation
  • Support of multilingual platform
  • Deriving insights from the content
  • Meeting dynamic business needs

I then covered various AI possibilities around the Drupal CMS with some industry insights, and touched upon a few real-world AI-based use cases across industry sectors and domains.

Questions from the audience included:

1. What is the enterprise level market share of Drupal in the current scenario?

Drupal is the only open-source CMS with a significant market share among enterprises, holding roughly 24-30% of the enterprise CMS market compared to its competitors.

2. How does Drupal handle multilingual capabilities and support?

Drupal supports multilingual sites by exposing various APIs for translation, locale handling, content translation, and internationalization. Starting from Drupal 7, support for multiple languages has been treated as a priority and addressed through these APIs.

3. How will AI change the future along with other technologies?

By automating and adding cognitive capabilities to smart web apps, AI will thrive in providing:

  • Better user experience
  • Better targeting 
  • Better value for money

We would love to hear your feedback. Do add your comments below as well as any questions you might have.

Why Aligning Content To Product Marketing Is Important For Manufacturers

The modern-day economy is characterized by quite a few factors, such as increasing market saturation and cut-throat competition in nearly every niche and industry.

Hence, the need to stand out has led brands to create and publish more content than ever – leading to an overload of information for consumers.

Particularly for manufacturers, the relentless stream of content has made it harder for brands to cut through the noise, and gain the consumers’ attention.

A previous study revealed that consumers are exposed to 5.3 trillion display ads per year – and the number has only risen ever since.

In addition to display ads, the consumer is flooded with information through different channels that marketers use to position and advertise their products.

How Information Overload Impacts Manufacturing Companies

The human brain has evolved to process information, and retain knowledge in a certain way. The speed of technological change is yet to impact the rate at which humans consume information.

In such a data-driven atmosphere, information overload has had a negative impact on businesses around the world.

Stressed Workforce

We are creating far more data than we can put into use – and that’s a well-known fact.

According to Forrester, 60 to 73 percent of all data in an organization goes unused for analytics. Despite the fact that more companies are talking about big data, using technology to gather data, and acknowledging the value of this information – they are unable to get the most out of this data.

However, this has not lowered the expectations placed on employees; losing a critical piece of information, such as a product description, in the sheer volume of data can adversely affect the entire organization. Coupled with shrinking response times, information overload has hampered our ability to complete tasks.

For example, research has found that 25% of workers experience significant stress and poor health due to the volume of content that they’re required to process.

If this wasn’t enough, a similar study was conducted across the world where participants from the US, England, Singapore, and Australia described the impact of content overload.

The study reported that a whopping 73 percent of managers stated that their job required them to process a lot of data, and the resulting information overload affected their stress levels.

Overall, it’s quite clear that there is a need for organizations to streamline their stages of content creation and distribution.

One Solution To Your Content Problems: A Content Management System (CMS)

A content management system is one of the best investments that a business can make to solidify their digital presence.

Apart from ensuring great content that works, businesses need to prioritize proper content management to attract their target audience and keep their employees stress-free.

Here are some benefits of using a content management system.

Allows Multiple Users

In a product marketing organization, different people are assigned responsibility for each stage of the content strategy: this includes content creation, publication, deriving insights, and keeping a check on content quality.

Without a proper channel to log in and record your session, it is a continuous struggle for administrators to keep a check on the input provided by content managers.

A CMS not only allows multiple users to access the platform at the same time but also keeps a record of everything that occurs for future reference.

Streamlines Scheduling

Be it product pages, additions to your site or new blogs – a CMS allows you to review updates from your content department in one glance.

Scheduling and continuous checks and balances on the overall content strategy are among the most important tasks for a product manager. Without a CMS, this task becomes much more complex.

As product marketing continues to become more integrated, with several communication mediums overlapping one another, streamlined scheduling has become more important than ever. Now, modern product managers need to be aware of the status of all projects in real-time – and this is exactly what a good CMS system allows them to do.

Helps You Manage Content

According to IDC, 71 percent of marketers now create more than 10 times the amount of content they did previously.

The rate at which content is being produced also gives rise to another problem – the pace at which the content is rendered obsolete. As consumers tend to filter through information, they expect to receive data that is currently relevant.

For many product marketing businesses, content management is not just the creation and segregation of different types of digital content – it also includes the ability to remove out of date information.

For example, if you are running a festive promotion for your product (Christmas or Thanksgiving), you need to be prepared to archive the data once the season ends.

Without a CMS in place, this task can take hours’ worth of time as you have to carefully identify and archive all posts about the promotion.

A good CMS has such data grouped in one place, where all menus and links are updated automatically. In other words, the removal of time-sensitive content can be easily done in a few clicks.

You’re The One in Control

To sum it all up, the biggest advantage of a CMS is the absolute control that it lends to product marketing organizations.

Instead of relying on external sources or having a chaotic content feed, a CMS delivers organization, discipline, and uniformity to the process of content creation.

With the right CMS platform, you can update, approve, and deploy content as fast as needed on any scale – without this affecting your performance. In other words, with the help of a CMS, managing content and assets then becomes all about quality, efficiency, and velocity.

The rise of content marketing has been meteoric, to say the least – in fact, modern buyers rely five times more on digital information when making a purchasing decision.

Consider the fact that an average buyer is likely to interact with 10.4 pieces of content before buying a product, and the importance of CMS becomes clear as day.

Gatsby and Drupal: A match made in heaven?

Gatsby is a popular static site generator that can communicate with any backend.

The front-end landscape has exploded in the last three years. Today you have various libraries and front-end frameworks like React, Angular, and Vue.js, as well as tightly coupled full-stack frameworks like Next.js and Nuxt. Among all these options, Gatsby finds a sweet spot with its JAMstack approach. The JAMstack is “a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.”

In this article we will discuss how to make use of the JAMstack with Gatsby and Drupal. We will also cover questions that are generally not answered in other blogs:

  1. Gatsby is a static site generator, so in what scenarios can I use it?
  2. My site has a lot of dynamic content; how do I make it work well with Gatsby?
  3. I have read that Gatsby is based on React, so how do I decide when to use Gatsby and when to use a React-only front end?
  4. Can I use Gatsby to progressively decouple my Drupal website?
  5. How can I configure Gatsby so that most of the configuration is read from the backend and parity is maintained with the existing website?
  6. What is server-side rendering and how can I leverage it with my Gatsby and Drupal setup?
  7. Is Gatsby good only for anonymous users, or can I leverage it for authenticated flows as well?
  8. Setting up OAuth on Drupal and leveraging it for Gatsby.
  9. What is the relationship between JSON:API and GraphQL?
  10. Setting up Gatsby.
  11. What other alternatives play well with Drupal?
  12. Should I use Gatsby or Tome?
  13. Feedback

There are many how-to articles that already explain how to get started with Drupal and Gatsby. I will just link to them instead of repeating them, and answer the questions I had while following them. If you have already made your decision, just skip to the Setting up Gatsby section.

What is the JAMstack?

JAMstack is “a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.” In a crude analogy, you can think of it like the static file deployments you were doing if you were in software development, say, a decade ago. That is not entirely accurate, but the JAMstack tries to combine the ease of static sites with the dynamism provided by APIs.

JavaScript: used for client-side interactions, for better handling of dynamic rendering on the client side, and for interaction with the backend, if any. Think AJAX.

APIs: any interaction with the backend is abstracted into reusable APIs. An added advantage is that the front end can consume your own APIs as well as any third-party APIs.

Markup: you use markup (HTML/CSS) for the front end. In the JAMstack, markup is generally prebuilt at deploy time.

Gatsby is a static site generator, so in what scenarios can I use it?

Gatsby is a static site generator. What this means is that the public folder created during the build works as a static website, so you can take that folder and deploy it on any server running Apache, nginx, or similar to serve requests. For example, if you use the default Gatsby starter kit communicating with a Drupal backend, the backend only needs to be up during the build. Once the build is done, you can shut down your Drupal server and deploy just the generated public folder. In other words, your Gatsby site talks to your Drupal backend at build time, fetches all the details it needs, and creates static files for every path on your site.

My site has a lot of dynamic content; how do I make it work well with Gatsby?

Think of Gatsby as a state machine. The state consists of the code changes you make and the data you save in your database, so whenever the code or the data changes, you need to rebuild to get the latest state. For code changes it is easy to trigger a build; for database changes it is a little harder, but you can use webhooks to trigger builds (a sketch follows below). The question is how frequently you trigger them. If you can live with slightly stale data, you can build every few hours; that might suit blogs, where refreshing content every couple of hours or days is enough. But if your site has a lot of user interaction, frequent rebuilds can be a pain and also add load on the server. In such scenarios it makes sense to keep the content that changes rarely in a backend like Drupal, and to offload the content that changes frequently (like comments) to a third-party service (like Disqus for comments).
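
As a rough illustration of the webhook idea, here is a minimal Node.js sketch: a helper that fires a POST request to a build hook whenever Drupal reports a content change. The BUILD_HOOK_URL variable and the hook endpoint are assumptions; use whatever your build service actually exposes.

// Hypothetical sketch: trigger a Gatsby rebuild when content changes in Drupal.
// The build hook URL is an assumption; most CI/static hosts expose something similar.
const https = require('https');

function triggerBuild(buildHookUrl) {
  const req = https.request(buildHookUrl, { method: 'POST' }, (res) => {
    console.log(`Build hook responded with status ${res.statusCode}`);
  });
  req.on('error', (err) => console.error('Failed to trigger build:', err));
  req.end();
}

// e.g. called from a small webhook receiver that Drupal notifies on node save
triggerBuild(process.env.BUILD_HOOK_URL);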

I have read that Gatsby is based on React, so how do I decide when to use Gatsby and when to use a React-only front end?

If you want control over all your data, both the parts that change rarely and the parts that change often, it can make sense to use the Gatsby build for the content that changes rarely, and add React components that make live API calls for the content that changes often. Since Gatsby is built on React, you can stick to standard API calls in componentDidMount and render the results in the render function, as in the sketch below.
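
Here is a minimal sketch of that pattern; the comments endpoint and its field names are assumptions for illustration, not part of any real API.

// Hypothetical sketch: a statically built page embeds a component that fetches
// frequently changing data (comments) at runtime instead of at build time.
import React from 'react';

class Comments extends React.Component {
  state = { comments: [] };

  componentDidMount() {
    // Live API call made in the browser, after the static HTML has loaded.
    fetch(`https://backend.example.com/api/comments?post=${this.props.postId}`)
      .then((res) => res.json())
      .then((comments) => this.setState({ comments }))
      .catch((err) => console.error('Could not load comments:', err));
  }

  render() {
    return (
      <ul>
        {this.state.comments.map((comment) => (
          <li key={comment.id}>{comment.body}</li>
        ))}
      </ul>
    );
  }
}

export default Comments;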

Can I use Gatsby to progressively decouple my Drupal website?

If you are not sure whether to use Gatsby for your whole site, this approach can be a good fit. For example, if your website has many features including a blog, you can move just the client-facing blog to Gatsby. In your Apache/nginx config, make sure that only example.com/blog is handled by your Gatsby front end and everything else is handled by Drupal.

How can I configure Gatsby so that most of the configuration is read from the backend and parity is maintained with the existing website?

One simple approach I took was to keep the Drupal backend at backend.example.com and the front end at gatsby.example.com.

In my gatsby-node.js I added a rule like this:

result.data.allNodePage.edges.forEach(({ node }) => {
  createPage({
    path: node.path.alias,
    component: staticPageTemplate,
    context: {
      slug: node.path.alias,
    },
  })
})

This makes sure that the Gatsby front end uses the same path alias set in the backend, so backend.example.com/blog/firstblog maps to gatsby.example.com/blog/firstblog. It also made it easier for me to check for missing pages against the sitemap.xml.
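
For context, the snippet above lives inside Gatsby's createPages API in gatsby-node.js. A minimal sketch of the surrounding boilerplate could look roughly like this; the template path and the allNodePage query are assumptions (they presume gatsby-source-drupal is pulling Drupal "page" nodes), so adjust both to your own setup.

// Hedged sketch of the gatsby-node.js wrapper around the snippet above.
const path = require('path');

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;
  // Illustrative template path; point this at your own page template.
  const staticPageTemplate = path.resolve('./src/templates/staticPage.js');

  const result = await graphql(`
    {
      allNodePage {
        edges {
          node {
            path {
              alias
            }
          }
        }
      }
    }
  `);

  result.data.allNodePage.edges.forEach(({ node }) => {
    createPage({
      path: node.path.alias, // reuse the Drupal path alias on the Gatsby side
      component: staticPageTemplate,
      context: { slug: node.path.alias },
    });
  });
};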

What is server-side rendering and how can I leverage it with my Gatsby and Drupal setup?

Gatsby comes with this behaviour by default: API calls to your data sources are made at build time, so all the pages are rendered server-side ahead of time. Once a build is done, you can pretty much shut down your backend, unless there are real-time API calls your Gatsby app still needs to make.

Is Gatsby good only for anonymous users, or can I leverage it for authenticated flows as well?

By default Gatsby works best for read-only kinds of websites, but nothing stops you from creating forms and authenticated user flows, since it uses React internally. Connecting Gatsby with Redux can help for authenticated flows; check out https://medium.freecodecamp.org/how-to-get-started-with-gatsby-2-and-redux-ae1c543571ca for Gatsby and Redux integration. If you want to add authentication to your Gatsby site, check out https://www.gatsbyjs.org/blog/2019-03-21-add-auth0-to-gatsby-livestream/#authentication-in-gatsby
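
As a very small, hypothetical illustration of gating content on the client side (the token storage approach and component names are assumptions, not a recommendation for production auth):

// Hypothetical sketch: hide a piece of UI unless the user appears to be logged in.
import React from 'react';

const isLoggedIn = () =>
  typeof window !== 'undefined' && Boolean(window.localStorage.getItem('access_token'));

const PrivateContent = ({ children }) => {
  if (!isLoggedIn()) {
    return <p>Please log in to see this content.</p>;
  }
  return <div>{children}</div>;
};

export default PrivateContent;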

What is the relationship between JSON:API and GraphQL?

In Drupal, JSON:API is a popular module, and there is also a GraphQL module. But if you use the https://www.gatsbyjs.org/packages/gatsby-source-drupal/ plugin, you will notice that on the Drupal side you only need to enable the JSON:API module. What happens is that the gatsby-source-drupal plugin takes the JSON:API output and converts it into a GraphQL-compliant structure, so you can use GraphQL queries to access your Drupal data. You are often told to enable JSON:API access for all users; be careful, as you might be exposing confidential data you did not intend to. Do check this before deploying to production. A typical plugin configuration is sketched below.
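
For reference, wiring up the plugin typically looks like the sketch below; the backend URL is illustrative, and apiBase only needs to be set if your JSON:API path prefix differs from the default.

// gatsby-config.js - hedged sketch of a gatsby-source-drupal setup.
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        baseUrl: 'https://backend.example.com/',
        apiBase: 'jsonapi', // default JSON:API path prefix in Drupal 8
      },
    },
  ],
};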

What are the other alternatives that play well with Drupal?

Gatsby is a great tool. It makes starting with React and Drupal easy, and it is highly configurable and customizable. But it can still be overwhelming for Drupal developers who have not worked with front-end frameworks before. Thanks to open source, we have other alternatives.

Tome is another interesting project that generates static sites from Drupal 8 quickly, without requiring you to learn the new front-end libraries and tools.

In https://twitter.com/DrupalSAM’s own words

It’s important for me to make Tome more accessible to less technical users. Anyone that can make a Drupal site should be able to make a static site without learning a completely different toolset and programming language.

I think this is a great goal and it will help the Drupal community immensely. Once people realize the advantages of the JAM architecture, I am sure most blog-like sites will default to static sites :)

If you don't trust me, just check out the video in the tweet below to see how easy it is to create static sites with Tome.

 

https://twitter.com/DrupalSAM/status/1091813197214928896

 

Should I use Gatsby or Tome?

As with most answers to technical questions, the answer is “it depends” :P

But here is the rule of thumb I use.

  1. If you are not comfortable with front-end work and just want the JAMstack or static sites for performance's sake, Tome is a great choice; there is not much additional learning curve.
  2. If you want to leverage the fast pace of development in the React ecosystem but don't want to build things from scratch, go with Gatsby.

Setting up Gatsby

Before setting up Gatsby, I recommend going through the following videos. They give you high-level insights and should be enough to get you started quickly.

1. https://www.youtube.com/watch?v=PKMTLyIpbvQ

2. https://www.youtube.com/watch?v=vMrv5toXwjc

3. https://www.youtube.com/watch?v=vjBi3Rt-Xas

Understanding Gatsby Internals for Drupal

If you would like to understand the details of how Gatsby works and how to customise it for Drupal, you can check out this blog by Ryan Bateman:

Tutorial: GatsbyJS for Drupalers; or, How to JAMStack-ify your Drupal Site with GatsbyJS (www.ryanbateman.space)

Things I wish I knew in Gatsby before getting started

  1. Gatsby also uses the term “node”, and it means something different from a Drupal node.
  2. Any atomic piece of data that Gatsby handles can be considered a node. So in Gatsby lingo, Drupal nodes, categories, taxonomy terms, and users are all nodes.
  3. There is no rule that every Gatsby node needs its own page; you decide which nodes get pages.

Feedback

I hope this answers some of your questions before you shift to a decoupled Drupal site or decide on a front-end framework for your Drupal API backend. I collected all these highlights and notes about Drupal/Gatsby using our extension; once you do the same, you can search all your notes from our dashboard.

If you have questions I have not been able to answer, let me know in the comments. We can explore them together and share our notes for mutual benefit. Let us make this a repository for all the questions Drupalers commonly ask about the headless option.

 

Why I am finally switching from Chrome to Brave

TLDR: Brave is a fast, secure, cryptocurrency-driven browser (the cryptocurrency is used for marketing and for fixing the ad model of the web). It is trying to fix the internet as we know it today by improving the ad model. Having known about it for a couple of months, I had been postponing my switch to Brave for lack of extension support. Now that it has started supporting Chrome extensions, I am switching.

The Long Version

The thing with tools is that once you choose one and get used to it, it is really difficult to replace it. This is true even if the new alternative is much better than the old one. With the rapid pace of development today there is tough competition and usually a close fight. With nightly builds and weekly releases, it is too taxing to compare the strengths of tools objectively on a regular basis. When was the last time you even considered an alternative browser? Do you compare the latest features of the popular browsers every month? At least once a quarter? Every six months? Not many people do; I didn't either. It is very rare that we go back to the drawing board to review which tools we use. It is too much work. So we make a decision and stick with it until we are forced to change. We know that is not ideal, but that is how we are all wired, I guess. I am saying all this because, in spite of all of it, I am switching to Brave, and you might go through the same when you consider it ;)

Who is behind Brave

I first heard about this project almost a year ago during its ICO. I was curious about it because it was co-founded by Brendan Eich, best known as the creator of the ubiquitous JavaScript programming language and a co-founder of Mozilla, the organisation behind the open-source web browser Firefox. Beyond Brendan Eich, the team has some great names, each of them a pro in one critical area of the project. Go check out https://basicattentiontoken.org/about/ in detail once you are done reading this article and you will know what I mean.

Screenshot from official website

With such a great line-up, people definitely take note when the team speaks. In fact, they have been making use of this quite effectively.

Features of the browser

If I had to list the features of this browser in the order I like them most, it would be:

  1. Browse Faster
  2. Protect your Privacy — Blocks trackers.
  3. HTTPS Everywhere.
  4. TOR integration.
  5. Block Ads by default.
  6. Fixing the Ad model — Pay your favourite publishers.
  7. Marketing
  8. Chrome extensions support.

Browse Faster

Their website claims the browser is up to 2 times faster on desktop and up to 8 times faster on mobile. I have not benchmarked it yet, but even during normal usage I could see that it was way faster than other browsers. The startup time is also very short, which is critical for me personally. A couple of years ago, though I liked everything else about Firefox, I didn't move to it because it took so long to cold start.

Protect your Privacy — Blocks trackers

I have been using this browser for only a couple of minutes and I can already see that it has blocked 129 trackers. I had been blind to the fact that so many trackers were following me on every popular site I visit. I am a little more confident now that my personal data and patterns will not be sold to marketers without my consent. Brave does a good job of keeping track of these numbers and showing them visually in the browser, which will definitely raise awareness.

https everywhere

The name is a bit of a misnomer. I thought it would automatically convert all HTTP sites to HTTPS, creating a secure channel on the fly. How foolish of me :P If no certificate is installed on the server and the website has not enabled HTTPS, there is not much the browser can do. It seems I am not the only one, as I have read a couple of discussions where people assumed that “HTTPS upgrades” creates secure channels; it really just means redirecting to the HTTPS version when one is available.

https upgrades means, sites and hyperlinks, that are linked to http:// and if a equivalent https:// site is available… brave browser, automatically changes those links to https:// equivalents, to prevent tracking etc.,

I think Brave should rephrase this, as it can be misleading. If you have not yet decided to use Brave, you can use the HTTPS Everywhere extension instead. Having realised what “httpseverywhere” actually does, it is no longer the killer feature I thought it was, but I am leaving it in this position on the list because many people might misunderstand it like I did.

TOR Integration

Though I have read about Tor a couple of times, I have never really felt the need to use it. That is one reason; the other is that I am lazy. But Brave makes using Tor very easy.

When you open a new private tab, Brave shows this message. Now you can start using Tor from a private tab. Isn't that cool?

Message on private tab

You can start a new private tab with Tor by right-clicking the new tab icon in any of the existing tabs.

Opening a private tab with TOR

The documentation is not clear about how to open a “New Private Tab with Tor”. I think Brave should change the message on the first screen so that people have clear instructions for using this feature.

Blocks ads by default

I never used ad blockers until recently. Having managed a couple of small websites in the past, I understand the problems publishers face: if most users start using ad blockers, publishers have to find alternative sources of income, and since there were none, I never felt like using ad blockers. But now Brave is coming out with a very good solution to this problem, which we will discuss later.

However, while going through the Brave download page, I came across a piece of information that surprised me. We rarely realize that we may be paying a substantial amount to ISPs to download ads and trackers we don't want, so I was naturally surprised to find out the actual figures: Brave claims it is $276 a year.

The average mobile browser user pays as much as $23 a month in data charges to download ads and trackers — that’s $276 a year. Brave blocks ads and trackers, so you don’t pay for them.

Now that Brave has an alternative revenue model, I think we can safely start using “Disable ads and trackers”.

Timing

You can get everything else right, but if your timing is wrong there is not much you can do. We all know favourite projects that got everything right yet failed to gain mass adoption just because they were either ahead of their time or too late. So timing is critical.

I think Brave is entering the market at the right time. Here are a couple of reasons why:

  1. Facebook sells personal data — http://fortune.com/2018/04/10/facebook-cambridge-analytica-what-happened/
  2. Google might face record fine in Android monopoly case — https://www.expresscomputer.in/news/google-might-face-record-fine-in-android-monopoly-case/25864/
  3. 50 millions Facebook accounts might be compromised — https://www.cnbc.com/2018/10/02/facebooks-muddy-account-breach-response-could-be-the-new-norm.html
  4. Tim Berners-Lee starts Solid, a web where users have complete control over their data. https://medium.com/@timberners_lee/one-small-step-for-the-web-87f92217d085
  5. Blockchain and Cryptocurrencies are here to stay (Brave is using them. Read in the next sections below) — https://www.finpipe.com/why-blockchain-is-here-to-stay-2/

Revenue Model

Digital advertising is broken and online publishing is dying a slow death. The two are interrelated, and one cannot be fixed without fixing the other. In the words of Brave's founders, “It is a market filled with middlemen and fraudsters, hurting users, publishers and advertisers.”

The Basic Attention Token (BAT) was developed to address this. BAT, an ERC-20 token built on top of Ethereum, will be the utility token in a new, decentralized, open-source and efficient blockchain-based digital advertising platform.
In this ecosystem, advertisers will give publishers BAT based on the measured attention of users. Users will also receive some BAT for participating; they can donate it back to publishers or use it on the platform.
This transparent system keeps user data private while delivering fewer but more relevant ads. Publishers experience less fraud while increasing their percentage of rewards, and advertisers get better reporting and performance.

The following revenue split seems fair enough and appears to take everybody's interests into consideration.

The best part is that Brave is not forcing anything upon the users. Based on your preference you can choose any of the paths below.

It is heartening to see that Brendan and the team are not going for token liquidation and are privately funded. This shows their long-term commitment to the product. Not everybody has this luxury, but it definitely boosts confidence to know they are thinking long term, since browsers take time to gain adoption and having deep pockets helps.

Marketing

Best viewed with Chrome or Firefox

We all saw messages like the one above on various websites a couple of years ago. Developers liked these browsers because it was easier to code website front ends for them, so naturally most websites started promoting these browsers as well.

But the websites' association with those browsers pretty much ended there. With Brave the scenario changes: since publishers can get paid for their efforts and content, they have a vested interest in promoting the browser. Whether the revenue from this source will be enough to replace ad revenue is something we will have to wait and see, but it can already be considered an alternative source of income.

Marketing Budget

While there are examples of open-source projects like Linux and Apache gaining popularity without much marketing, marketing is critical for a browser to gain traction. Brave is not shying away here; they are spending real dollars on customer acquisition.

A customer acquisition cost of around 5 USD

For every active user converted, I think Brave pays close to 5 USD. Not many publishers may depend on this, but for the new model to gain adoption and challenge big players like Chrome and Firefox, the number of users on this browser has to skyrocket. When the product is good, don't underestimate the power of referrals; we all know how PayPal's referrals snowballed and made it a household name in a very short span of time. Don't be surprised if the same happens to the Brave browser.

I just added the following banner on my website.

You will start seeing images like the one below on many publications going forward; get used to it. The online publishing industry has been dying a slow death. Everybody is looking for a way out, but nobody has figured out a model that is a win-win for the reader and the publisher. If Brave can do that, I am sure most publications will flock to this model. But again, it all comes down to crossing the critical mass. It is good that Brave is privately funded and seems ready to burn money until it crosses 15 million monthly active users.

Positioning

Brave is trying to position itself as the privacy-centric browser. Google's Chrome, once the David to Internet Explorer's Goliath, has become the new Goliath, and Brave is positioning itself as the new David. Recently Brave filed a privacy complaint against Google. This could be huge if Google ends up on the wrong side of the verdict: it would not only get Brave much-needed publicity, it would also cement Brave's position as the privacy-centric browser and the representative of the common user against Google's monopoly, which could play well for Brave over the coming years.

Support for Chrome extensions

Brave Unveils Development Plans for Upcoming 1.0 Browser Release, Including Transition to Chromium… (brave.com)

Regarding the move to a Chromium base, the blog says:

Which we believe will allow us to focus more fully on Brave features and less on Chromium upgrades and basic browser work.

I think this is a brilliant strategy. Google is doing a great job of keeping Chrome fast and cutting edge; the problem with Chrome has always been the overreaching control Google gains once we start using its browser. So this approach from Brave is a master stroke, I would say.

If you want to give the latest version a try, you can check out the Brave Dev version.

Download Brave Dev | Brave Browser (brave.com)

Using both Brave and Brave Dev alongside Chrome

Since the Dev version is built on Chromium, Chrome extensions are now supported. I installed a couple of extensions to try them out and they work without any issues. This was the major hold-back for me, and now I have moved to Brave.

I am still using Chrome for dev tasks but have moved to Brave for general browsing.

The big picture

The Basic Attention Token team is playing in the field of the attention economy and digital advertising. They have realised that the browser is a critical component of this, and so they are concentrating most of their resources on the browser. This seems like a very interesting approach to fixing digital advertising. They are in it for the long run and they have their priorities right, so for now I am bullish on BAT and Brave. If you want to know more about BAT, do read the What is Basic Attention Token article.

If you are already using Brave, do let me know how your experience has been. Also, what do you think of BAT? For official updates you can check out Basic Attention Token.

5 ways to breach-proof your Drupal Platform

With each passing day, the threat of security breaches to public-facing digital platforms only increases. Nothing is safe, be it a corporate website, a SaaS application, or an e-commerce platform. However, much of the risk is minimized if the underlying platform is fundamentally more secure. Drupal is one of the most robust WCMS platforms, built from the ground up with security and performance in mind. It not only deploys the industry's best security practices, it also has a highly responsive community that rigorously and continuously performs security tests and rapidly provides patches and mitigations for vulnerabilities.

Valuebound has been among the top 10 global contributors to the Drupal ecosystem and has a dedicated Drupal Security Center that continuously monitors the security of Drupal and develops solutions to help enterprises mitigate their security risks and build compliance around their business continuity processes. While it is always advisable to have an expert who works around the clock to manage the security of your Drupal installations, here are a few handy suggestions to make any WCMS deployment harder to breach.

1. Upgrade, Upgrade, and Upgrade

From our experience working with large and small enterprises, we always recommend running the CMS on the latest version of the platform, because security breaches most often happen by exploiting vulnerabilities in code that has not been patched. The key reasons to upgrade your platform are:

  • Upgrading avoids unnecessary expenses incurred owing to a security breach
  • Using an outdated version of the platform exposes it to security vulnerabilities
  • An update will fix technical issues & bugs
  • New and enhanced features and functionalities can be added to the platform

Recently, Drupal released its latest version - Drupal 8.6 - which includes a wide range of new features such as demo data, the Media Library, YouTube & Vimeo embeds, Layouts, and Workspaces. We are working with several enterprises to upgrade or migrate their content management systems in order to add these new features and functionalities. It should be noted that older CMS versions are usually targeted as they are more vulnerable.
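As a rough illustration, on a Composer-managed Drupal 8 site a core update usually boils down to a few commands like the ones below (a minimal sketch; exact package and command names depend on your setup and Drush version):

composer update drupal/core --with-dependencies   # pull in the latest core release
drush updatedb -y                                 # apply any pending database updates
drush cache:rebuild                               # rebuild caches so the new code takes effect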

2. Strong User Management - A Must

Very often, a security breach is an inside job rather than an external hack. Keeping your website safe and sound, therefore, requires strong internal user management. Typically, in an organization, there are various stakeholders who require access to the website in order to manage different areas within it. The security habits of such users can themselves become the source of a breach.

We recommend limiting account privileges on a need-to-have basis; full access should be granted very judiciously and only when it is absolutely required. We also suggest automated or prompt removal of the accounts of users who have left the organization.
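If your team uses Drush, routine account clean-up can also be scripted; a small sketch, assuming Drush 9+ and hypothetical account names:

drush user:block former.employee               # block an account that should no longer log in
drush user:cancel former.employee -y           # or cancel (remove) the account entirely
drush user:role:remove administrator jane.doe  # revoke a role that is no longer needed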

3. Know Your Hosting Provider

There are a bewildering number of choices when it comes to selecting a hosting provider. Of course, some are good, some are bad, and some were good and then turned bad. Pantheon, Acquia Cloud, and Apache are some of the established players offering stable, enterprise-grade hosting services. For Drupal installations, it is always recommended to look for a hosting provider that offers a security-first Drupal hosting solution with all the server-side security measures, such as SSL.

4. Encrypt Sensitive Information

We recommend implementing proper certificates to encrypt sensitive information. Proper deployment of SSL certificates helps protect your users, protects you, and helps you gain customers' trust and sell more. Ask your in-house team or Drupal vendor to perform security audits at regular intervals, as this will allow you to fix loopholes.
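A quick way to sanity-check a certificate deployment from the command line is with plain OpenSSL and curl (nothing Drupal-specific; example.com is a placeholder):

# Inspect the certificate's subject, issuer and expiry dates
openssl s_client -connect example.com:443 -servername example.com </dev/null | openssl x509 -noout -subject -issuer -dates

# Confirm that plain HTTP requests are redirected to HTTPS
curl -I http://example.com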

5. Take Backup regularly

Things can go wrong in multiple ways, and there is a huge risk of losing all your data in the event of a security breach or the introduction of critical bugs while making changes or upgrading the platform. Hence it is important to back up your platform regularly. There are a host of service providers that offer backup and storage solutions to deal with such eventualities, and other vendors provide backup, storage, and recovery as managed services. The choice of vendor will depend upon the criticality of your application and the restore points demanded by your business.
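Even before choosing a vendor, a simple Drush-based backup of the database and user-uploaded files can be scheduled; a sketch, with paths and retention left to your own policy:

# Dump the database, gzipped, into Drush's default backup location
drush sql:dump --gzip --result-file=auto

# Archive user-uploaded files (adjust the path to your files directory)
tar -czf files-backup-$(date +%F).tar.gz sites/default/files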

Talk to our Drupal security experts to understand your security parameters and deploy the solutions, tools, systems, and processes that are just right for you.

Valuebound is deeply steeped in the open source movement and specializes in Drupal CMS strategic consulting, development, and dedicated managed support for media & publishing, e-commerce, and high-tech companies.

Implement These Modules to Make Your Drupal Site More Secure

A website with a security hole can be a nightmare for your business and erode the trust of your regular users. A security breach is not just about website resources; it can put the website's reputation at stake by injecting harmful data into the server and executing it. There are many ways this can happen. One of them is an automated script that scans your website, looks for sensitive areas, and tries to bypass the site's security with injected code.

I believe you might be thinking about your own website now.

  • Is your website fully secured?
  • How do you make sure everything shipped on your website is genuine, and how do you protect it?

As a Drupal developer, I've come across several contributed modules available on Drupal.org that can help your site deal with security issues. I can't promise that applying these modules alone will safeguard your website, but it's always recommended to follow the established guidelines and use these modules to minimize Drupal security breaches.

Let's take a look at these top Drupal security modules:

Secure Pages

We all know that moving an application from HTTP to HTTPS adds a layer of security that end users can trust. Unlike regular modules, simply following the usual installation steps is not enough; your server also needs to be SSL enabled.

Currently, it is available for Drupal 7 only.
Ref URL: https://www.Drupal.org/project/securepages

Security Kit

The kit addresses multiple classes of vulnerabilities such as cross-site scripting, cross-site request forgery, clickjacking, and SSL/TLS weaknesses. With the help of the Security Kit module, we can mitigate the common risks these vulnerabilities pose. Some of them have already been taken care of by Drupal core; clickjacking protection, for example, was introduced in version 7.50.

Currently, it’s available for both Drupal 7 and Drupal 8.
Ref URL: https://www.Drupal.org/project/seckit
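On Drupal 8, contributed modules like Security Kit are typically added with Composer and enabled with Drush (a minimal sketch; on Drupal 7 you would download the module and enable it from the Modules page instead):

composer require drupal/seckit   # fetch the module and its dependencies
drush en seckit -y               # enable it
drush cache:rebuild              # rebuild caches so the new settings pages appear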

Password Policy

This module enforces rules that users must follow when setting up a password. A web application with a weak password policy allows attackers to guess passwords easily. That's why you see password policy instructions while setting a password: the goal is not a fancy password, but one that is secure and difficult to guess.

# Password should include 1 capital letter
# Password should include 1 numeric digit
# Password should include 1 special character
# Password should respect minimum & maximum length

This module is currently available for both Drupal 7 and Drupal 8.
Ref URL: https://www.Drupal.org/project/password_policy

Paranoia

This module looks for places in the user interface where an end user could misuse an input area, and blocks them. A few features worth highlighting here are:

# Disable the "use PHP for block visibility" permission.
# Disable creating input formats that use the PHP filter.
# Disable editing of user #1.
# Prevent granting risky permissions.
# Disable disabling this module.

Currently, it’s available for Drupal 7 and Drupal 8.
Ref URL: https://www.Drupal.org/project/paranoia

Flood Control

This module provides an administrative UI for managing the flood-control limits Drupal core keeps per user ID and per IP address. Configuration is available to restrict users after a given number of failed attempts by user ID or IP. We already know that Drupal core has a built-in shield mechanism: after five unsuccessful login attempts, a user is temporarily blocked. With the help of this contributed module, we can tune those limits a bit.

Currently, it’s available for Drupal 7.
Ref URL: https://www.Drupal.org/project/flood_control
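Under the hood, the module exposes Drupal 7 core's flood variables; the same limits can also be set directly with Drush on a Drupal 7 site (a sketch using the core variable names, which you should verify against your own installation):

drush vset user_failed_login_user_limit 5        # failed logins allowed per user account
drush vset user_failed_login_user_window 21600   # per-user window, in seconds (6 hours)
drush vset user_failed_login_ip_limit 50         # failed logins allowed per IP address
drush vset user_failed_login_ip_window 3600      # per-IP window, in seconds (1 hour)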

Automated logout

In terms of user safety, this module lets the site administrator force users to be logged out when there is no activity from their end. On top of that, it provides various other configuration options, such as:

# Set timeouts based on roles.
# Allow certain users to stay logged in for a longer period of time.
# Let users set their own timeout.

Currently, it’s available for Drupal 7 and Drupal 8.
Ref URL: https://www.Drupal.org/project/autologout

Security Review

This module checks for basic mistakes that we make while setting up a Drupal website. Just untar the module and enable it; it will run an automated security check and produce a report. Remember, it won't fix the errors for you; you need to fix them manually. Let's take a look at some of the security aspects the module tests:

# PHP or Javascript in content
# Avoid information disclosure
# File system permissions/Secure private files/Only safe upload extensions
# Database errors
# Brute-force attack/protecting against XSS
# Protecting against access misconfiguration/phishing attempts.

Currently, it’s available for Drupal 7.
Ref URL: https://www.Drupal.org/project/security_review
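The module also ships Drush integration, so the checklist can be run from the command line; as far as I recall the command is secrev, but treat the name as an assumption and confirm it with drush help on your install:

drush en security_review -y   # enable the module
drush secrev                  # run the checklist and print the results (command name may vary by release)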

Hacked

This tool helps developers avoid hacking contributed modules and themes directly instead of applying patches or release updates. It works on very simple logic: it scans all the modules and themes available on your site, downloads the original releases, and compares them with what is installed to make sure everything is in its original shape. The result tells you which modules or themes have been changed; the rest you are well aware of - what needs to be done.

Currently, it’s available for Drupal 7 and Drupal 8.
Ref URL: https://www.Drupal.org/project/hacked
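The Hacked project has historically provided Drush commands as well; the names below are an assumption on my part and may differ between releases, so verify them with drush help:

drush en hacked -y            # enable the module
drush hacked-list-projects    # list projects and flag those whose code differs from the original release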
 

All of the above modules are my recommendations for what a Drupal website should have. Some contributed modules resolve security issues once they are configured correctly, while others are just informers: they will let you know about an issue, but you need to fix it manually.
 
Further, these contributed modules provide granular security depending on the complexity of your site and the types of users it serves. You can look up the right security modules and protect your site against anonymous attackers.

We, at Valuebound - a Drupal CMS development company - help enterprises with Drupal migration, Drupal support, third-party integration, performance tuning, managed services, and more. Get in touch with our Drupal experts to find out how you can enhance user experience and increase engagement on your site.
