Recent posts

Role of CIOs: Innovate in an ever-evolving world

Has the role of CIOs changed now that they no longer have to handle routine operational issues and can devote their time to developing the company, especially from an IT perspective? Damianos Soumelidis, MD of Nagarro Austria, believes that a broader understanding of the role is required from a consultant's perspective.

The Journey to Digital: Re-imagine your customer digital maturity model

In the first part of this series on "The journey to digital," we discussed the need to develop a relevant, business-transformation-driven digital strategy based on several underlying environmental, behavioral, and technological factors. What we realized in the process is that every organization has its own unique digital frontier[i], and hence its transformation journey must be unique.

Going Digital? Start with Design First

How Nagarro developed the first-ever native iOS/Android app in the US railroad industry.

Hitchhiker's guide to business transformation: Tips for innovation design in IT

Fast progress calls for fast solutions, especially from IT. Small miracles are expected on shrinking budgets, with traditional tools and classical approaches: miracles that, without a unique and visionary target design, never materialize. Business models need to involve the departments that will be responsible for implementing them over the long haul, namely IT. Nagarro makes the best use of various innovative technologies and brings its comprehensive industry, technology, and project experience into play. It has created a framework that supports CxOs in their innovation and transformation process. Here are some tips that support innovation design:

Set up an innovation platform
In everyday life, there is little room for looking at things from new perspectives. Customer journeys, interdisciplinary co-creation, idea generation, "out-of-the-box" working phases, and prototyping have been used for many years in various fields of innovation design. In the UK, proof of an innovation design process is even a prerequisite for government tenders. Setting up such a platform puts innovation on the agenda by creating an interdisciplinary ecosystem.

Get business units on board
Input from business units is valuable in several respects: you gain knowledgeable resources, and the invitation to co-create also ensures support during implementation. Steps such as evaluation, definition of the design task, or idea creation can take place alongside other assignments. The innovation task that IT derives from these steps is well-founded and pursues business goals.

Fail forward
Transformation does not take place on a flipchart. Developing many good ideas means that not all of them will be implemented or will work out. It is therefore important to define pilots and prototypes and to plan them within a reasonable framework. In an agile innovation process, it is also important to integrate new findings on a continual basis.

Innovation, but grounded in reality
Innovative design is not an end in itself but a superordinate target image that serves as preparation for concrete projects. Starting from the status quo, the transformation phases must be defined and realistically paced. For this, it is always important to take industry-specific technology trends, current market developments, and project synergies into account.

Listen up! You can bet on these technology concepts

Not long ago, the cloud was all CIOs had to keep up with; now they are implementing virtual reality concepts. It is not easy to stay abreast of global technological developments and innovations. Which technology concepts should companies bet on, and should they think short or medium term?

Data warehouse & analytics: Amazon Redshift or Google BigQuery?

In today's world, data analytics plays a vital role in the success of almost every business. To analyze your big data sets, you may end up spending a lot on complex hardware and software. Cloud offerings around data warehousing and analytics can relieve you of these pains, with cloud service providers offering tools that make it easy to store big data sets and perform advanced analytics on them. Google offers BigQuery and Amazon offers Redshift as cloud-based enterprise data warehouse options for large-scale data analytics, and both services take real-time data analytics for bigger businesses to new heights.

Google BigQuery and Amazon Redshift are both cloud services and process data in real time. The biggest advantage of using them is that they require no initial investment in hardware or software. However, choosing between the two can be difficult. The following factors play a crucial role in deciding which cloud offering is more suitable for your business:

Fully managed
With Amazon Redshift, you need to define the type of servers as well as the number of server instances you want to use in the cluster. This means you need a fair understanding of hardware limits to envision scale-up and scale-out. Google BigQuery, on the other hand, is a serverless data warehouse and does not require you to define any infrastructure; for example, you don't have to think about how many server instances you need or how they should be configured (CPUs, RAM, and so on).

Budget
With Amazon Redshift, you pay by the hour for each server you provision, even when the servers are idle, and compute charges apply in addition for the use of the server instances. With Google BigQuery, you are charged for the data your queries scan rather than for the server instances used to process them. However, it is important to note that BigQuery does not use database indexes, so a query might scan the entire database. The cost therefore depends on how many queries you execute and how much data they scan, which makes the monthly bill difficult to estimate, and costs can fluctuate with query patterns (the dry-run sketch below shows one way to gauge scan volume before running a query).

Speed
The speed of query execution largely depends on the number of CPUs. With Amazon Redshift, you get the number of CPUs you pay for. To keep performance high, you need to define how data is distributed among the server instances. Redshift allows data indexing, so you can define indexes for the queries you want to run fast. All this may require a database administrator who can look after these database tasks. Google BigQuery, on the other hand, automatically and intelligently provisions as many CPUs as needed to run your query as quickly as possible, typically in seconds. There is no concept of indexing in BigQuery, yet every query runs fast: internally, Google pushes each query down a massively parallel distributed tree and rapidly aggregates the results from its leaves. In addition, columnar storage helps achieve high compression ratios and scan throughput.

On these three factors, BigQuery scores over Amazon Redshift in almost all instances.
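To illustrate the budget point above, here is a minimal sketch, assuming the google-cloud-bigquery Python client and a hypothetical events table, that uses BigQuery's dry-run mode to estimate how much data a query would scan (and therefore roughly what it would cost) before executing it:

from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# The project, dataset, and table names below are hypothetical.
sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `my_project.analytics.events`
    WHERE event_date >= '2017-01-01'
    GROUP BY user_id
"""

# dry_run=True validates the query and reports the bytes it would scan
# without executing it; disabling the cache gives a worst-case estimate.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)

print("This query would scan %.2f GiB" % (job.total_bytes_processed / 2**30))

Multiplying the reported volume by BigQuery's per-terabyte query price gives a rough per-query cost ceiling before you commit to running it.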
Amazon Redshift is cost-effective and gives you the opportunity to analyze and optimize queries; Google BigQuery scores higher on simplicity. BigQuery shields you from the underlying hardware, database, and configuration of any kind. It eliminates the need to understand the complexities of data indexing, vacuuming (for periodic maintenance), distributing data among servers, server sizing, and scaling up and down. Apart from scanning a smaller volume of data, BigQuery also removes the need to think about infrastructure and lets you focus fully on analyzing the data. For more information on our offerings in the cloud space, you can reach us at info@nagarro.com.

Can chatbots transform the banking & financial industry?

Gartner predicts that by the year 2020, customers will manage 85% of their relationships with an enterprise without interacting with a human.[i] The banking industry is at the forefront of this radical change in customer interaction. Third-party services like Slack, Facebook, Twitter, and WhatsApp are the new interfaces actively bridging the casual-communication gap that existed between banks and their customers. And keeping this trend alive are the AI-powered chatbots that are empowering the banking and financial industry by simplifying the entire process of customer interaction and enabling improved customer acquisition, engagement, and retention.

A 5-step approach to a successful mobile testing strategy

Consumer and enterprise mobile applications have grown exponentially over the last few years. In 2016, global mobile internet user penetration exceeded half the world's population.[i] In fact, as of March 2017, Android users were able to choose between 2.8 million apps, while Apple's App Store remained the second-largest app store with 2.2 million apps.[ii] With the user base multiplying at a fast pace, it is essential to ensure the performance and functionality of an application, and a comprehensive mobile testing strategy therefore acts as a key differentiator.

The scope of mobile testing is vast due to numerous software platforms and versions, a variety of hardware and screen resolutions, and the diversity of mobile devices. In addition, frequent operating system updates, new mobile devices entering the market, and rising consumer expectations of quick software upgrades mean that prompt and efficient test cycles are required. Covering this diverse scope with an inefficient mobile testing strategy can be expensive and time consuming. At Nagarro, we have developed a mobile testing strategy that revolves around automation, maximizing coverage while optimizing the cost of mobile testing.

Step 1: Device sampling
The first step of our mobile automation strategy is to identify the set of real devices, platforms, and simulators on which the automation framework will run. We call this device sampling. The aim is to strike the right balance between simulators and real devices to optimize the cost of testing without impacting quality. Simulators are used effectively for testing basic application functionality and flow, while testing on real devices covers touch response and user experience. A stable mobile application (after simulator testing) is then tested on real devices.

Step 2: Choosing the right automation tool
After device sampling, we select the right mobile automation tool. The choice is based on factors such as cost, type of mobile platform, script reusability, ease of use, browser support capabilities, and supported languages. To optimize automation cost, we look for open-source tools first and move to commercial tools only when we find constraints with the open-source ones. Some of the tools we use include Appium, Ranorex, and XCUITest for Android and iOS automation (a minimal Appium scripting sketch follows these steps).

Step 3: Selecting the automation framework
After finalizing the tool, we focus on the type of automation framework (data-driven, keyword-driven, or hybrid) for the mobile application. The selection is based on our framework selection checklist, which includes criteria such as the size of the application and the type of business scenarios.

Step 4: Minimizing automation risk for the future
Once the framework is identified, a small proof of concept (PoC) of 2-3 weeks is done to automate critical business workflows and make the automation future-proof. This helps in identifying any major issues that might be encountered during future automation. The PoC also helps automation engineers estimate the volume of automation accurately.

Step 5: Monitoring
After the estimations and PoC are complete, automation scripting, execution, debugging, and reporting start, and the KPIs (such as coverage and detected defects) are closely monitored. The automation framework is also integrated with the build pipeline (continuous integration) to ensure automated smoke, sanity, and regression runs as soon as a build is deployed in the test environment.
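To make Step 2 concrete, here is a minimal smoke-test sketch, assuming the Appium Python client (v1-style desired capabilities) and a hypothetical Android login screen; the capability values and element IDs are illustrative, not part of Nagarro's actual framework:

from appium import webdriver

desired_caps = {
    "platformName": "Android",
    "deviceName": "emulator-5554",      # an emulator first, per Step 1
    "app": "/path/to/app-debug.apk",    # hypothetical build artifact
    "automationName": "UiAutomator2",
}

# Connect to a locally running Appium server (default port 4723)
driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_caps)
try:
    # Basic functional flow: log in and verify the home screen appears.
    # All element IDs below are hypothetical.
    driver.find_element_by_id("com.example:id/username").send_keys("qa_user")
    driver.find_element_by_id("com.example:id/password").send_keys("secret")
    driver.find_element_by_id("com.example:id/login").click()
    assert driver.find_element_by_id("com.example:id/home").is_displayed()
finally:
    driver.quit()

In practice, a script like this runs first against emulators or simulators (Step 1) and is then pointed at the sampled set of real devices by swapping the capabilities.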
It is vital that automation remains relevant and effective over time as the mobile application grows; close monitoring and maintenance of the automation framework are therefore essential. Mobile application testing is challenging. However, with an optimal selection of simulators and real devices, we can maximize test coverage without having to test every feature on each device. Nagarro's mobile automation framework supports a variety of platforms and is integrated with the build and deployment pipeline for continuous automated testing and regular quality validation. Read more about key considerations to keep in mind while formulating an effective automation testing strategy.

Can we go 'digital' without DevOps?

An organization ready to undertake a digital journey often fails to keep newly implemented systems robust and flexible enough to adapt to new technologies. Despite countless DevOps solutions and a close collaborative culture between development and operations, in most cases we forget to bring continuity to the process with regular feedback. DevOps can get us on the right track toward the vision of digitization; however, to transform legacy code into a more distinct business model and build a successful agile delivery framework, we need Continuous DevOps.

Why vote for Google Cloud Platform?

As more and more businesses decide that they don't want to manage infrastructure and IT operations themselves, the 'Cloud First' approach is becoming the norm rather than a choice. With Amazon Web Services (AWS) and Azure leading this domain, Google has often been criticized for lacking the institutional will to create a robust cloud platform that enterprises would be enticed to adopt. This outlook is changing now, however: Google is not just investing heavily in its cloud platform but also bringing an enterprise mindset and ecosystem to the Google Cloud Platform (GCP). A few services that make GCP unique are:

BigQuery
A fully managed, petabyte-scale, low-cost enterprise data warehouse for analytics. Simply speaking, BigQuery is the public interface to one of Google's most successful internal tools, "Dremel." It can scan millions of rows without an index in a matter of seconds and is an excellent alternative to Hive on Hadoop or Spark SQL. However, limited integration with some industry-standard business intelligence tools and the lack of support for user-defined precision alternatives are some of BigQuery's major shortcomings. Of course, like any data warehouse, BigQuery cannot predict the amount of data that will be stored and processed in the future. Our solution experts at Nagarro generally address this with calculation tools to optimize usage and predict costs, relying on long-term storage, query and usage optimization, data expiration policies, partitioning, and sharding.

Cloud Dataflow
A managed service for developing and executing a wide range of data processing patterns using a unified programming model. Big data applications often face challenges such as a lack of resource capacity, a wide range of data processing patterns, and the need to manage multiple services for various data processing pipelines, which leads to both performance and operational issues. Dataflow provides a simplified, developer-friendly way to approach this. While the Cloud Dataflow runner service abstracts the application from low-level details, the Dataflow SDK lets developers create data processing jobs and focus on logical composition rather than physical orchestration (a minimal pipeline sketch appears at the end of this post). Pipeline design matters: flaws in it can lead to a huge increase in reads and writes, which eventually degrades performance. At Nagarro, we recently recommended Cloud Dataflow together with BigQuery to a customer whose requirements centered on the IoT space, with real-time event processing and stream processing of real-time data.

Cloud Dataproc
A fully managed Hadoop and Spark offering. When an application is tightly coupled with big data tools like Hadoop and Spark, Dataproc provides an easy alternative by making the entire process easy to start, run, and manage. Dataproc also abstracts the application from hardware and software installations. Ease of use, together with quick cluster creation and scaling capabilities, makes Cloud Dataproc ideal for many business scenarios. Although built-in support for on-demand clusters is not available, clusters can be managed using the software development kit (SDK) and REST API provided by Google Cloud. Also, unlike in other MapReduce solutions, a cluster is acquired as part of job creation in Google.
Factors like per-minute billing granularity, separation of storage and compute, and affordability compared with competitors make Dataproc and its service offerings unique.

Machine Learning & Artificial Intelligence
One of the biggest challenges the industry faces in the machine learning domain is training larger models, which often requires expensive special-purpose hardware. Together with the lack of technical capability to build and train custom models, this is the biggest hindrance to the adoption of machine learning (ML) and artificial intelligence (AI) by enterprises. Google solves this problem by providing pre-trained models along with a long list of supported APIs such as Google Cloud Vision, Google Translate, Cloud Speech, and Cloud Natural Language. Moreover, the Google Translate API and Google Prediction API are simplified services that make learning and adoption easy. Many customers we have worked with at Nagarro are also able to make the best use of Google's AI/ML services based on TensorFlow for predictive analytics and deep learning.

Infrastructure as a Service (IaaS)
While the world is talking about "DevOps," Google is already realizing the "NoOps" dream. Google says that the instance boot time of Google Compute Engine (GCE) is considerably better than that of most of its competitors. It also claims to be the first platform to deliver its cloud infrastructure on Skylake (Intel's latest processor). Another interesting feature is that GCE load balancers don't need any pre-warming, unlike those of AWS. GCE also brings in the concept of "pay per use" based on sub-hour billing: compute engine instances are charged in one-minute increments (with a ten-minute minimum), so users don't pay for compute minutes their application does not consume.

Our observation is that Google has emerged as a strong and serious cloud player and has reinforced the belief that AI is the future of the cloud. At Nagarro, we help our clients choose a computing platform based on an exhaustive checklist of functional and non-functional requirements, and we provide solutions keeping in mind ever-changing market trends. For more information on our offerings in this area, you can reach us at info@nagarro.com.
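As a postscript to the Cloud Dataflow section above, here is a minimal batch-pipeline sketch, assuming the Apache Beam Python SDK (the programming model that Dataflow executes); the bucket paths are hypothetical, and this is an illustration of logical composition, not a Nagarro reference implementation:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Defaults to the local DirectRunner; setting runner='DataflowRunner'
# (plus project, region, and staging options) would run the same logical
# pipeline on the managed Cloud Dataflow service.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")  # hypothetical path
        | "Strip" >> beam.Map(lambda line: line.strip())
        | "DropEmpty" >> beam.Filter(lambda line: line)
        | "CountAll" >> beam.combiners.Count.Globally()
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/event_count")
    )

The developer only declares the chain of transforms; where the work physically runs, and on how many workers, is left entirely to the chosen runner.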