Big data and information

Machines are increasingly fitted with different kinds of sensors. This sensor information is at the core of the IoT revolution we find ourselves in. A simple machine can have hundreds of sensors, and the most complex ones have tens of thousands. Imagine the volume, the velocity, and the variety of the data coming from these sensors and sitting in a store. At this point, big data is what immediately comes to mind. Collecting information from these thousands of sensors is one thing; understanding it is an altogether different one. We are certainly in a big data situation, and your big data specialist will throw IoT, Hadoop, and Spark at the problem. But would this lead to insight? Would it lead to a discovery or a tangible return on investment?

Big intelligence

Information must turn into understanding, and that is where AI steps in and saves the day. Understanding the information is an appreciable effort, especially if we want to make our investments in complex data infrastructure worthwhile. At Nagarro, we recently had a chance to help a client with something similar. We were asked to look at almost a terabyte of data (yes, no exaggeration here!) coming from approximately 500 sensors, each emitting a number (positive or negative, with up to 15 significant digits) once every millisecond. The client also wanted a purely mathematical approach and did not want any special knowledge about the sensors to be used to facilitate this understanding.

Analyze incoming streaming values and discover the different states the machine is in at any given time
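To get a rough sense of the scale involved, here is a back-of-envelope estimate of the raw data rate. The encoding is an assumption on our part (one 8-byte floating-point value per sensor per millisecond, ignoring timestamps and protocol overhead), so treat the numbers as an order-of-magnitude sketch rather than the exact pipeline figures:

```python
# Back-of-envelope estimate of the sensor data rate described above.
# Assumption (not from the original brief): one 8-byte float per sensor
# per millisecond, ignoring timestamps and protocol overhead.
SENSORS = 500
SAMPLES_PER_SECOND = 1000          # one reading every millisecond
BYTES_PER_VALUE = 8                # 64-bit float, ~15 significant digits

bytes_per_second = SENSORS * SAMPLES_PER_SECOND * BYTES_PER_VALUE
print(f"{bytes_per_second / 1e6:.0f} MB/s")              # ~4 MB/s
print(f"{bytes_per_second * 3600 / 1e9:.1f} GB/hour")    # ~14.4 GB/hour
print(f"{bytes_per_second * 86400 / 1e12:.2f} TB/day")   # ~0.35 TB/day
```

At roughly 0.35 TB per day under these assumptions, a near-terabyte dataset plausibly corresponds to just a few days of machine operation.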
The art of data sciences

There are different approaches to solving a problem like this, and our Deep Learning CoE (Center of Excellence) was thrilled to have the chance to apply itself to it.

Step 1: The first step was to understand what type of values each of these 500 sensors was representing. This involved digging deeper into each sensor signal independently; the technical term for this is univariate analysis. Interestingly, if done properly, univariate analysis can reveal a lot of what is going on, especially when the signals come from a relatively simple machine. In our case, however, we had 500 sensors, and signals from different sensors overlapped in non-linear ways, leading to extremely complex states. Sensor signals have a shape and a form.

Every signal has a shape that explains the nature of the signal

Science of machine learning

Step 2: Now it was time to look at these sensor signals in clusters or groups. Signals move with each other in different ways, and exploring their joint movement is the topic of multivariate analysis. Multivariate analysis offers a range of techniques to model the available data as different distributions, and based on these distributions the data can be segregated into clusters or groups. We use this technique to isolate normal data from abnormal data. It is one of the more complex kinds of analysis and requires techniques like Gaussian mixture models, dimensionality reduction, anomaly detection, and novelty detection algorithms. Here the goal was to discover a group of important signals and use it to predict the state of the machine. We applied these techniques and found interesting group structures in the signals. As the machine state was revealed, we were able to see deeper into the data.

Deep neural networks

Step 3: Multivariate analysis is really helpful, but we were hoping to find deeper insights from deep neural networks. Neural networks are successful in solving many problems in the domains of machine vision, speech, natural-language processing, and more. However, neural network architectures tend to be complex, and it takes serious effort to train them to discover underlying patterns. This is what our Deep Learning CoE does for a living: building neural nets and training them on various kinds of tasks. We have trained neural nets to recognize 1,000 different objects in images and to identify different disease stages in a scan. These neural nets can also understand human emotions and sentiments.

Dense and convolutional neural networks to reveal spatial structural patterns

An interesting property of neural networks is that they discover the important things to pay attention to in the data by themselves. They do not have to be taught which signals are important and which are not. That is a big deal, especially since this is the heavy lifting we do in univariate and multivariate machine learning, where we must use our mathematical intuition to decide which signals to attend to and which to ignore. A neural network makes this discovery on its own, and it is amazing to see the discoveries it makes each time.

Recurrent neural networks

We built some very complex and very deep neural nets, with tens of layers of neurons. We used recurrent neural nets in some very interesting combinations to help reveal the structural and temporal patterns sitting deep inside the terabytes of data. We finally revealed the essential pattern in the data, and it looked beautiful.
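To illustrate the kind of recurrent model we mean, here is a minimal sketch of an LSTM autoencoder over windows of sensor readings, written with Keras. The window length, layer sizes, and the idea of clustering the learned encodings into machine states are illustrative assumptions, not the exact architecture we deployed:

```python
# Minimal sketch: an LSTM autoencoder over windows of multivariate sensor
# readings. The compressed encoding of each window can be clustered to
# expose distinct machine states. Sizes and window length are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

WINDOW = 200      # timesteps per window (e.g., 200 ms of data)
SENSORS = 500     # one feature per sensor

inputs = layers.Input(shape=(WINDOW, SENSORS))
encoded = layers.LSTM(128, return_sequences=True)(inputs)
encoded = layers.LSTM(32)(encoded)                      # compressed state vector
decoded = layers.RepeatVector(WINDOW)(encoded)
decoded = layers.LSTM(128, return_sequences=True)(decoded)
decoded = layers.TimeDistributed(layers.Dense(SENSORS))(decoded)

autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# windows: array of shape (num_windows, WINDOW, SENSORS), scaled per sensor
# autoencoder.fit(windows, windows, epochs=20, batch_size=64)
# states = encoder.predict(windows)   # cluster these vectors to find regimes
```

The autoencoder is trained to reconstruct its own input, so nobody has to label which signals matter; whatever the network needs to keep in the 32-dimensional encoding is what it has decided is important.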
Beautiful patterns

The output is beautiful, and it was discovered entirely by the neural networks we built: the heartbeat of the machine, which tells us a lot about what is going on with it. It tells us when the machine is idling, when it is working slowly, when it is working fast, when it is on the edge of a precipice, when it falls over, when it is taking a maintenance break, and so on. By looking at the patterns, we can even tell whether it is a Monday, a Tuesday, or a Friday; the machine follows the regime of the assembly floor, so the patterns begin to reflect everything. All kinds of structural and temporal patterns suddenly reveal themselves. The mathematics works, and the machine state is now extremely predictable from the data the machine sensors are streaming out.

Our solution is a deep learning model that keeps learning as it predicts and has a dynamic, deployable life cycle. Deep pattern discoveries like this can lead to platforms for serious automation and predictive analytics, and can form the basis for many interesting tooling, messaging, prediction, and maintenance platforms. Deep learning is here to stay and will lead to interesting breakthroughs in the coming years. We are very excited to have the opportunity to nurture deep learning at Nagarro. For a more detailed discussion of the solution above, feel free to reach us at info@nagarro.com.


With 5G technology, improved hardware at reduced prices, and convenient storage options, the "things" in the IoT continuum are getting smarter each day. In fact, it is predicted that approximately 5.6 billion IoT devices owned by enterprises and government bodies will utilize edge computing for data collection and processing in 2020. Major players in device manufacturing, network operations, cloud computing, and end-user experience are investing heavily in this space. Let's take a quick look at what edge computing is, why it is needed, and how it can be leveraged in your organization today.

What is edge computing and why do we need it?

The cloud-only approach in an IoT solution pushes all the data to the cloud through a fat pipe. This is a centralized approach that passes the sensor data through several layers and tiers of processing units before finally reporting issues in the incoming data to the end user, who then typically acts on them. This happens in a data center that is at least hundreds of kilometers away from the sensors, with a corresponding delay in the response to the things. Dumb sensors with a smart cloud may work well in some scenarios, but under limited connectivity this approach becomes troublesome, as the sensors send not only readings (numbers) but also images and videos for real-time processing. In these scenarios, a loss of connectivity leads to downtime or data loss in a typical IoT solution.

To change this type of processing, we must make the devices smarter. A device that can ingest, process, and act on the collected data, with decision-making, storage, and resilient networking capabilities embedded, is called an autonomous thing. The shift towards autonomous things aims to shorten the lengthy path to intelligence by bringing the capability to, or near, the devices.

Figure 1: Edge computing

Such a solution avoids a fat pipe for transmission and reduces bandwidth cost, which can be very useful in remote areas. Moreover, it needs zero or minimal downtime of critical functionality during intermittent cloud connectivity, thereby reducing the dependency on the cloud for critical intelligence. Cloud data centers in such solutions act as the sink for the relevant aggregated data, which can be used further for training and learning. This intelligence is in turn synced continuously back to the things, making them capable of real-time decisions. Thanks to the ever-decreasing cost of hardware, such devices are not a distant reality but a real commodity today. Providing applications and solutions for such autonomous devices, to enhance the capability of the IoT continuum through the cloud, is called edge computing, cloud-assisted Internet of Things, or Cloud-of-Things. Edge computing can help realize solutions that were not readily available for general scenarios, such as real-time computing, smart devices, and more. Computing capability near the devices can help increase robustness and speed up decision-making in critical conditions.
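To make the pattern concrete, here is a minimal sketch of the behavior described above: the device decides locally, buffers aggregated readings while connectivity is down, and syncs to the cloud when the link returns. The function names, threshold, and loop period are illustrative placeholders, not part of any specific edge SDK:

```python
# Minimal sketch of local decision-making with store-and-forward sync.
# read_sensor(), act_locally(), cloud_is_reachable(), and push_to_cloud()
# are illustrative placeholders, not part of any particular edge runtime.
import time
from collections import deque

TEMP_LIMIT = 90.0                 # act immediately above this threshold
buffer = deque(maxlen=10_000)     # bounded offline buffer of aggregates

def edge_loop(read_sensor, act_locally, cloud_is_reachable, push_to_cloud):
    while True:
        reading = read_sensor()
        if reading["temperature"] > TEMP_LIMIT:
            act_locally("shut_down_motor")        # no round trip to the cloud
        buffer.append(reading)

        if cloud_is_reachable():
            while buffer:                         # drain the backlog
                push_to_cloud(buffer.popleft())
        time.sleep(1.0)                           # aggregate once per second
```

The critical decision never waits on the network, while the cloud still eventually receives every buffered aggregate for training and fleet-level analytics.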
Here are some benefits of edge computing that can be leveraged by any organization:

For Business

1. On-boarding old and new remote assets for the digital journey.
2. Reusing existing machines/devices to derive intelligence from the raw data.
3. Critical decision-making in real time.
4. Governing and controlling data and assets remotely.
5. Ensuring security and compliance (automatically, for on-premise assets).
6. Enabling a low footprint on cloud services, leading to cost savings.

For Technology

1. Adhering to industry protocols such as CAN bus, Modbus, or OPC for connecting assets inside the edge location.
2. Utilizing standardized technologies such as Docker to adapt to different platform architectures such as ARM, x64, and x86, which also helps with code re-usability and easy deployment.
3. Providing built-in resilience and fault-tolerance mechanisms for processing.
4. Enabling intelligent data aggregation to use bandwidth efficiently, especially in remote locations such as oil rigs or dams.
5. Utilizing local computation, where available, to provide machine learning and analytics capabilities.
6. Enabling large-scale IoT deployment at a global level, as well as providing Over-The-Air (OTA) updates for runtime and computing upgrades.

Figure 2: Aspects of edge computing

Conclusion

Embarking on an edge computing journey means dealing with aspects ranging from the hardware suppliers who design the chipsets to the end-user experience providers who present the relevant information to end users on dashboards and other mediums. This matrix changes based on the requirements to be executed in the edge environment. Organizations should strategize to develop edge computing capabilities using cloud enablers that have their respective edge flavors, such as AWS Greengrass, Azure IoT Edge, and IBM Edge Analytics. Moreover, there are emerging open-source micro-kernel-style runtimes like Apache Edgent, EdgeX Foundry, and Liota, which can also be targeted to provide different edge computing flavors as per the requirements. To be production ready, edge computing requires considerable effort on device security, deployment, multi-device integration, data orchestration, maintenance and health monitoring, offline storage and data sync, and failure and resilience strategies. These considerations help organizations fulfil their core business objectives in achieving true IoT enablement.

Top 10 factors to build high-performing agile teams

Every transformational leader would prefer a team that is engaged, motivated, and contributes directly towards achieving the organizational goals. Here are the 10 key factors that really worked for me in building high-performing agile teams.

1. Communicate the big picture

A team works better when it knows how the assigned task is aligned with the organizational strategy and how it contributes towards the external business goals. Picking up user stories from the backlog without analyzing how they add to the organization's vision and goals is not effective. It creates neither a culture of creativity nor an environment that promotes constructively challenging what is being done and how it is being done. Successful organizations invest adequate time in communicating a common understanding of the overall business objective throughout the organization. Departments and teams are given the freedom to define their goals in alignment with the organization's objectives. When a team defines goals for itself, there is a greater connect and hence a far greater chance of the team meeting those goals.

2. Constant team composition

Very often, teams are formed by grouping available people towards a project objective, and when the project completes, the team members are released for other business initiatives. In a new team, it takes time to establish positive team dynamics (forming, storming, norming, and performing) and for individuals to understand the strengths of the other team members and their different ways of working. It also takes time to understand how one person's skills complement another's. With time, the team bonding, the dynamics, and the ways of working get better. People are more productive when they enjoy working with their peers. If the team composition changes, the team starts all over again. If your environment allows it, you may want to try the concept of letting people choose and form their own agile teams: "self-organizing teams." This is a significant but one-time disruption which gives team members the opportunity to choose the people they want to work with.

3. Bring work to the team

Keeping teams idle is a big demotivator. Once the team structure is finalized, it is important to ensure a regular flow of prioritized work for the team. Instead of the team looking for work, the team should be assigned work before such a situation arises. Avoid unnecessary pressure or constant changes in priorities during the sprint as much as possible.

4. Smaller the better

An ideal team size is 5 to 7 people. Small teams are nimbler, and they can communicate and collaborate better. If you have 10 or more people in your team, consider dividing it into smaller groups.

5. Identify meaningful metrics for the team

Often, we see too many or too few metrics for the team. Sometimes we enforce metrics that do not align with the value expected from the team. Rather than rushing to measure something in the first go, the initial effort should be around defining the metrics that help measure and boost overall performance. The metrics must be meaningful to the team and crafted so that top management can consume them to measure real progress across the organization. Use information radiators in the team area to amplify these metrics. Many teams generate a large number of metrics without making any active use of them.
Instead of having dozens of metrics, establish a few "relevant metrics" around factors such as business value, quality, and lead time. The team should be able to understand and consume the metrics on a regular basis and learn from them.

6. Recognize team effort

A high-performing team makes sure that each team member performs at his or her best. This culture can be nurtured by measuring the performance of teams and rewarding the best-performing team. The measurement criteria should include parameters on business value, quality, and timeliness. Some organizations go so far as to restructure their incentives to focus on team performance over individual performance, which fosters a culture of team over self.

7. Empower the team to take decisions

The team should be empowered to take its own decisions. It should be presented with the work to be done and allowed to find its own solutions and ways of working. The scrum master or product owner should not overly control or direct the activities of the team. Read our recent perspective on common product owner anti-patterns for more clarity.

8. Establish strong engineering practices

For software teams, it is important to build a cadence of continuous integration and continuous deployment. Automated code reviews, automated tests, automated builds, and automated deployments help in a big way in discovering problems early and increasing the confidence of the team and the business.

9. Make time for fun and innovation

Teams should set aside 20-30% of their time for fun and for innovation. This not only helps keep morale high but also ensures people have a fresh mind and can contribute to innovating for the organization.

10. Swarming

Getting the team to work together on one problem at a time helps its members understand each other better. Individuals learn the unique strengths of the other members and can collaborate better when stuck on specific problems.

Do you have any experiences with setting up high-performing teams? Do you have any questions? Are you looking for the right guidance and focused training for you and your team? We would love to help. Please reach us at info@nagarro.com.
