
A Data Readiness Manifesto in the times of COVID

In a socially distanced world, our focus on work appears to have intensified. With fewer interruptions and the ubiquity of video-collaboration applications, the nature of work itself is metamorphosing for millions. And so is the way business is done.

Across geographies and industries, the mounting importance of going digital was never in doubt. But it has now been heightened by the behavioral changes COVID has forced on us.

The pandemic is on a path to radically alter the ways consumers interact with businesses. For instance, a May 2020 BCG banking industry report points out that one in four customers is planning to use branchless banking or stop visiting branches altogether after the crisis.

Returning to the beginning of this piece: I realized last month that staring at my laptop for long(er) hours is a pain in the neck. Literally. A pain in my neck. After an exhaustive hypothesis-elimination exercise, the culprit surfaced: my laptop, via its various inadequacies, was causing the pain (and the watering eyes).

So in earnest began my shopping expedition for a desktop monitor.

In the COVID period, I quickly found out that my favorite computer peripherals stores were either locked down or barely functional. My trusted product advisors were missing. And gone too was a ‘large swathe of decision-making wealth’: the fact-finding, needs analysis, product recommendations, and sales discounts.

For the customer in me, all this counted as an experiential loss. After all, a buyer’s trust-bond is created over many happy hours of exchanging insights and trends.

Questions for digital enterprises (including ones that are planning to go online):

  • Is your business recreating a similar customer experience dynamic?
  • Does your business listen to and engage with your virtual buyers as well as it does in the brick-and-mortar world?
  • Sure, you are industriously and enthusiastically posting across a host of social media, but are you able to listen to the customer, probe his needs, zero in on a buyer’s pain points, perceive buying behavior and demographic preferences, lay out product nuances, detail the price comparators, and address the numerous other variables that close sales?

If my virtual purchase adventure, described below, is anything to go by, I am guessing the answers to the above are negative.

Therein lies the first differentiator (and challenge) precipitated by the COVID times: ‘the human-buying-experience’ has to be replicated (and possibly enhanced) digitally.

The neck pain unrelenting, I took my search online, little knowing that a secondary data-avalanche waited ahead.

Tamr, the data mastering company, in its 2020 survey on the state of data and digital transformation, surveyed 300 C-suite executives at US financial institutions with revenue of $1 billion or greater. Here are the key findings:

  • Only one percent of those surveyed are not pursuing digital transformation
  • 47 percent say the main driver of digital transformation is keeping up with the competition
  • 63 percent say data management is a drag on their transformation efforts (because data is unreliable, disorganized, or unusable)
  • One in four are dissatisfied with their current methods for managing data velocity, volume, and variety
  • 75 percent can’t keep up with constant changes in data over time
  • 55 percent say non-scalable data management practices are causing choke-by-data
  • One in two cite ‘speed to insight’ and ‘unifying data across the enterprise’ as their primary weaknesses
  • More than fifty percent say they aren’t utilizing their current data to its full potential because it is siloed throughout the organization

The derived macro-picture is clear in its summary:

Managing data has notable systemic impediments; inadequate data readiness brings significant drag to digital transformation efforts; and there is a strong need for solutions to data volume, variety, and velocity.

I called up a few trusted and knowledgeable friends. Soon after, I started with the organization whose name is now a global ‘verb’: Google.

I feed in the phrases and search. And dozens of websites tuned to the latest SEO algorithms show up on my screen immediately.

I proceed to spend many knowledge-filled hours on the sites of global (and national) e-retailers. I try the filters: brand, display size, display technology, monitor resolution types, horizontal and vertical monitor resolution, energy consumption certifications, and average customer reviews. I compare and contrast, I analyze and assimilate, I deduce and deliberate.

Slowly and surely, I learn more and more about anything remotely connected to a computer monitor.

Next, I turn to various product blogs and their expert recommendations: nifty pictures, neat summaries of features, advantages, and benefits, and yet more information. I wade in deeper – refresh rates, response times, aspect and contrast ratios, panel technology, bezel design, curved or flat screen, built-in webcam and speakers, viewing angle – and factor in variables that are unique to my condition: the monitor stand adjustments (height, tilt, swivel, pivot, and their permutations thereof).

Needless to say, all facts and information are calibrated against my budget.

All things considered, am I close to making a decision? Not yet.

I find myself asking:

Saved-by-data or death-by-data?

In the absence of a trusted sales professional who listens and diagnoses my needs to offer a slew of recommendations, how does a buyer join the dots himself?

Especially for products that lie outside his area of expertise.

Questions for digital enterprises (including ones that are planning to go online):

  • Does your data readiness strategy actively engage the virtual buyer (is their feedback collated and are their observations correlated)?
  • Is your product and services data consistent, well-structured, in a readable format, and aligned to industry aggregators and search engines?
  • Does your data enable real-time integration with other sources for analysis?
  • Can the data be accessed through repeatable, automated methods? (A minimal sketch of such a check follows this list.)
  • Can the data be tied back to physical sources involved in the production?
  • Does your organization have resources to institutionalize knowledge that comes out of data relationships and real-time models?
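
For illustration, here is a minimal sketch of the kind of repeatable, automated readiness check the list above alludes to. The column names, thresholds, and CSV source are hypothetical assumptions, not a prescribed standard; real checks would run against the enterprise’s own catalog and freshness rules.

```python
# Hypothetical data-readiness check for a product catalog.
# Field names, thresholds, and the CSV source are illustrative assumptions.
import pandas as pd

REQUIRED_COLUMNS = {"product_id", "price", "category", "updated_at"}
MAX_NULL_RATIO = 0.05      # tolerate at most 5% missing values per column
MAX_STALENESS_DAYS = 7     # records older than a week count as stale

def readiness_report(df: pd.DataFrame) -> dict:
    """Return a small dictionary of readiness signals."""
    report = {"missing_columns": sorted(REQUIRED_COLUMNS - set(df.columns))}
    null_ratios = df.isna().mean()
    report["columns_over_null_budget"] = list(
        null_ratios[null_ratios > MAX_NULL_RATIO].index
    )
    if "updated_at" in df.columns:
        age_days = (pd.Timestamp.now() - pd.to_datetime(df["updated_at"])).dt.days
        report["stale_record_ratio"] = float((age_days > MAX_STALENESS_DAYS).mean())
    if "product_id" in df.columns:
        report["duplicate_product_ids"] = int(df["product_id"].duplicated().sum())
    return report

if __name__ == "__main__":
    catalog = pd.read_csv("product_catalog.csv")  # hypothetical export
    print(readiness_report(catalog))
```

Run on a schedule, a report like this turns ‘data readiness’ from a one-time audit into a continuously monitored property.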

As ‘data’ continues to systemically restructure our society, our economy, and our institutions in ways unseen since the industrial revolution two and a half centuries ago, businesses have to accept that data lies at the structural sweet spot between technology, process, and people.

Before enterprises commit to digital investments, they have to consider the various aspects of data governance, namely, integrity, security, availability, and usability.

And even before that, digital (or soon to be) enterprises have to honestly ‘see’ their data readiness.

Just as a holistically deliberated and uniquely picked monitor is critical to seeing your work (and keeping your health).

This was originally published on the www.emergingpayments.org website and is being reproduced here.


‘Carpe Diem’ – Seize the Present for the Financial Sector in the COVID times

Businesses worldwide are scrambling to respond to the COVID challenge. The International Monetary Fund (IMF) predicts the global economy will shrink by over 3 percent in 2020 – the steepest slowdown since the Great Depression of the 1930s.

The World Trade Organization (WTO) economists believe the decline will likely exceed the trade slump brought on by the global economic downturn of 2008-09.

As a consequence, business leaders need to make rapid decisions on controlling costs and maintaining liquidity while readjusting to the new realities. The ability to redefine priorities while staying agile and iterating changes on the fly is what will ultimately decide who lives to fight another day.

The role of data, analytics, and AI/ML is taking centre stage during the pandemic: collecting vital COVID-19 data such as transmissibility, risk factors, incubation period, and mortality rate. This data is being used for visualizations and for creating mathematical models that guide governments on current disease detection and control strategies.

Current media reports are flush with cases highlighting how organizations like IBM, Johns Hopkins University, BlueDot, Insilico Medicine, Medical Home Network, Google’s DeepMind, COVID-Net, and Esri are all leveraging data and ML models to combat the situation.

These efforts, though commendable, are all burdened by the new challenges that enterprises face today:

  • Lack of historical data and context, since this is a new situation
  • Model training and simulation become quite challenging
  • Older models becoming invalid or needing course correction
  • Real-time responsiveness needed due to the rapidly evolving scenario
  • Existing data quality issues now exacerbated, hence the need for more robust solutions

With this as the context, let’s explore the challenges for businesses, especially for the banking sector worldwide. In the pre-COVID world, the banking sector had already been grappling with how to respond to the digital revolution and how to innovate and stay relevant to its customers.

Let’s begin by asking the following questions:

What promise do AI/ML technologies hold in the Post COVID scenario for a bank? And more importantly, what are the necessary data building blocks that organizations need before they can accelerate on the digital transformation journey?

How can existing data and analytics investments be leveraged better, and how should organizations course-correct on the data analytics front to respond to these new challenges?

In simple terms, two key ways that banks can combat this crisis are, first, generating business and recovery scenarios based on forecasts and predictions, and second, setting in place tools and frameworks that facilitate early discovery of business trends, generate alerts, and share business-critical information across an organization’s operating landscape.

The assertion of basing business continuity management on data analytics is all well and good – but exactly how doable is it in the current scenario?

Let us explore this through a few tangible questions that banks, investment companies, insurance companies, real estate firms, and other industry players face presently.

In the COVID times of global uncertainty, do banks know how customers behave when interest rates change? What kinds of retail products (savings, wealth, insurance) do such pandemic scenarios promote (or demote)? Does the financial industry understand the impact on the consumer and commercial lending sectors? Will banks have definite sales projections, or business volumes, for the upcoming festival season and the new year? What about extrapolations and forecasts for credit card spends? Will the insurance industry have customer growth or demand numbers by product line? Predictably enough, the health insurance sector may see growth, but by how much? Will higher-infection zones correlate with larger loan risks and loan delinquencies? In light of the exponential growth of digital payments and e-wallets, are there any new business models or profit-sharing mechanisms that banks can conceive?

The list of questions can be interminable. But their answers are indeed changing in the new normal – and in ways we do not yet understand deeply enough, or in quantitative terms.

No matter how you see the COVID scenario (is the glass half full or half empty?), the fact is that our legacy data structures, databases, and data models aren’t capable of providing accurate insights we can rely on to predict the COVID future. However hard we may try, we cannot gauge the current situation’s impact on the basis of last year’s data. This means the situation calls for quick analyses of the mountains of different datasets coming in. Right now. Let us explore the key areas where urgent action is needed on this front:

Data Sources, Data Currency and Data Quality

Since past trends are unlikely to help, and incoming data is likely to be unstructured, unclean, qualitative, and anecdotal, financial organizations have to gear up to create capabilities for analysing real-time, broad-based, ad-hoc, and granular data. This dynamic kind of exploration will undoubtedly bring its own challenges, like deciding which data types, data sources, and data structures will need to be consolidated.

Adoption of data virtualization technologies becomes more important for tapping into sources as they exist. For example, joining an older data warehouse with a current CRM database might become very urgent for generating certain critical reports, without creating permanent data structures. This can be achieved more easily with data virtualization tools.
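
As a rough stand-in for what a virtualization layer does, the sketch below joins a warehouse extract with a CRM table on the fly, materializing nothing in either source. The connection strings, table names, and columns are illustrative assumptions; a real data virtualization tool would push the query down to both systems instead of pulling slices into memory.

```python
# Rough stand-in for a virtualized join across a data warehouse and a CRM.
# Connection strings, tables, and columns below are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

warehouse = create_engine("postgresql://user:pass@dwh-host/warehouse")
crm = create_engine("postgresql://user:pass@crm-host/crm")

# Pull only the slices the report needs, instead of replicating whole tables.
balances = pd.read_sql(
    "SELECT customer_id, outstanding_balance FROM loan_balances", warehouse
)
contacts = pd.read_sql(
    "SELECT customer_id, segment FROM customer_profile", crm
)

# The 'virtual' report: joined on the fly, nothing written back to either source.
report = balances.merge(contacts, on="customer_id", how="inner")
print(report.groupby("segment")["outstanding_balance"].sum())
```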

In terms of data currency and latency, many banks are already moving to real-time or near-real-time data ingestion platforms based on the likes of Apache Kafka or AWS Kinesis. Leveraging new methods of detecting data quality issues, like anomaly detection using ML, is becoming critical, given that traditional methods are time-consuming. Self-correction of data issues will also take centre stage as we move forward.
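
To make the anomaly-detection idea concrete, here is a minimal rolling z-score detector for a streamed data-quality metric (say, records ingested per minute). The window size and threshold are assumptions; in production, the values would arrive from a Kafka or Kinesis consumer rather than a hard-coded list.

```python
# Minimal rolling z-score anomaly detector for a streamed quality metric.
# WINDOW and THRESHOLD are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

WINDOW = 60       # look back over the last 60 observations
THRESHOLD = 3.0   # flag anything more than 3 standard deviations out

def detect(stream):
    history = deque(maxlen=WINDOW)
    for value in stream:
        if len(history) >= 10 and stdev(history) > 0:
            z = (value - mean(history)) / stdev(history)
            if abs(z) > THRESHOLD:
                yield value, round(z, 2)  # candidate data-quality incident
        history.append(value)

if __name__ == "__main__":
    feed = [1000, 1020, 990, 1010, 995, 1005, 1002, 998, 1015, 992, 40, 1008]
    for value, z in detect(feed):
        print(f"anomaly: {value} (z-score {z})")
```

Self-correction would then be a matter of wiring the detector’s output into quarantine-and-repair workflows rather than a human queue.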

Data Science, Analytics and the promised land

Next, let us examine a possibility brought on by the situation with an example. Say there is a rural consumer lending company specializing in loans for buying farm equipment, and a slew of government changes comes in: mandates on how rural loans are to be structured, changes to the moratorium conditions, or perhaps reduced interest rates. For this company, such changes have suddenly opened a completely new market. Undeniably, the company will need to respond quickly with new loan sizes, new loan types, new risk profiles, new collaterals, and the like.

Another scenario: the consumers’ response to the health scare is seen in a shrinking two-wheeler loan market and a simultaneous rise in the small car loan segment. Along with the automotive industry, how can banks analyse and validate this hypothesis?

In reality, the promise of data science and machine learning tools was to make all of the above a seamless experience. The reality, however, is that most data science work is still not enterprise-grade, and adoption is evolving at a slower pace than envisaged.

Banks need to urgently take the following aspects far more seriously: data modelling, model management, model deployment, and maintenance. Model drift, and model validity in terms of dimensions and algorithms, may need to be reassessed as well. For deployment, it is important to test and deploy at larger scale, for more users, by leveraging the real power of distributed architectures like Apache Spark.
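
One widely used way to quantify drift is the Population Stability Index (PSI), which compares the distribution a model was trained on with the distribution it sees today. The sketch below uses the common 10-bin convention and rule-of-thumb cutoffs; the synthetic ‘scores’ are stand-ins for any model input or output.

```python
# Drift check via the Population Stability Index (PSI):
# PSI = sum((actual% - expected%) * ln(actual% / expected%)) over bins.
# The 10-bin layout and 0.1/0.25 cutoffs are conventions, not standards.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_pct = np.bincount(np.searchsorted(cuts, expected), minlength=bins) / len(expected)
    a_pct = np.bincount(np.searchsorted(cuts, actual), minlength=bins) / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
pre_covid = rng.normal(650, 50, 10_000)   # training-time score distribution
post_covid = rng.normal(620, 70, 10_000)  # scores arriving today
print(f"PSI = {psi(pre_covid, post_covid):.3f}")  # > 0.25 usually means retrain
```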

There are various tools in the market today that assist in streamlining this process, the key ones being Dataiku and DataRobot, among others. If implemented well, these, along with the right team of data scientists and engineers, can reorient the existing investments in data science and ML to readjust to the new requirements much faster.

Investments in Data and Analytics Technology – Getting bang for the buck

There is another matter of considerable importance: what if the COVID situation is a temporary one, lasting 12–18 months (let’s say it turns out to be a temporary normal as opposed to a new normal)? If any investments are made in data storage, processing, or new analytics tools, will these data technology investments turn out to be use-and-throw?

In fact, for this dynamic analysis environment to churn out insights for newer decision-making models, fresh thinking will be needed around existing tools, newer frameworks, and flexible hardware investments. Pre-COVID, most financial institutions relied on ‘on-premise’ models, either on enterprise data platforms or on open source big data platforms in the Hadoop ecosystem.

Banks had not adopted cloud platforms for data storage mainly on account of regulatory and data privacy norms. But given the present maturity of cloud solutions, moving to the cloud – with its native data analysis and data visualization solutions – presents a cost-effective and timely strategy for financial institutions.

As mentioned earlier, the rate at which data is churned and analysis carried out will need to be ramped up or down sharply. Cloud vendors have come out with excellent distributed processing engines, comparable to or better than the open source options, whether for ETL/ELT, complex ML jobs, query engines, in-memory processing, or large-scale data lake platforms. However, given the costs of the cloud, it is important for banks to evaluate a mix of open source and cloud-native tools with cloud infrastructure as the back end.

The goal here is not a complete replacement of existing investments in on-premise data infrastructure, but to emphasize that this is an opportunity to quickly leverage the cloud, or a hybrid mix of tools, in this situation.

Conclusion

Finally, the players who win this battle of ‘ad-hoc and rapid analyses’ – required to stay viable in the COVID economy – will emerge as leaders of the new normal.

Conversely, failure to see that the COVID situation is popping open new market segments, splitting traditional ones, and forcing the rapid reengineering of business models will severely handicap organizations’ growth and market-capture strategies.

This was originally published on the www.globalbankingandfinance.com website and is being reproduced here.


R&D in the Banking Sector: Making the case for Innovation Data Labs

Ranked by industry percentages, what position do you think the banking industry occupies in global research and development spending?

Computing & Electronics, Healthcare, Auto, and Software & Internet are the top four (22%, 21%, 16% and 15% respectively). Banking does not occupy the fifth (Industrials), sixth (Chemicals and Energy) or even seventh (Consumer) spot.

In fact, R&D spending in the banking industry is so low that it is lumped under “others” at 1.9%.

To cut a long story short: banks have a long way to go in growing their R&D initiatives, especially in today’s shifting landscape, where increasing regulatory pressures, rising customer expectations, innovative technologies, and nimbler challengers regularly combine to disrupt the financial services sector.

Let us next examine in some detail a variable that has arguably had the largest impact on a BFSI organization’s R&D capabilities, namely, open source technology.

The Evolutionary Impetus

Our appetite for far-reaching technology changes is matched (and fuelled) by the incredible leverage open source technologies bring across industries.

(Think of the ‘unicorns’ formed around open source technologies: Red Hat, MuleSoft, Databricks, Elastic NV, Confluent, HashiCorp.)

In fact, the Fintech Open Source Foundation (FINOS), in its 2018 white paper, makes the central argument succinctly: “The question is not whether to use open source but how to do it more strategically, efficiently, and extensively than your competitors. With digital disruption handled collectively by technology solutions that become de facto industry standards, financial services companies become defined not by their software, but by execution and differentiation in customer service.”

Traditionally, large banks protected all technology as intellectual property and a driver of competitive advantage, with large-scale engineering teams building all software from scratch. In the last decade, this has rapidly changed.

Open source technology (and other turnkey solutions) has made serious inroads in financial services.

Back end technologies: Servers supporting the massive compute landscape, data storage and processing, and trading infrastructure – essentially all back-end software capabilities – largely run on open source Linux platforms

Engineering layer: Financial services software development has been commoditized with the large-scale use of open source for network communications, database storage, workflow management, web application development and much more

“As a Service” offerings: While not necessarily open source, these range from infrastructure and compute power to software and entire platform offerings that greatly reduce the customization and integration effort required by banks

Regulator-mandated openness & standardization: Regulators have started mandating the industry to open up, standardize and become more transparent, with Europe leading the drive with PSD2 and Open Banking in the UK.

Consequently, plugging in open source solutions means banks today can free up precious resources to focus on integration and, more importantly, on building their unique business value.

As the BFSI industry turns its attention to fintechs to meet its digitization challenges, the target areas for its R&D efforts do not waver: develop and deploy new technologies to better serve B2B banking customers, increase profits, improve compliance and security preparedness, and reduce infrastructure costs.

If the end-goals are similar, then where does the Banking R&D differentiation come from?

In one word: Reliability

Infusing reliability as a core rubric in their R&D paradigm means banks have to check a number of boxes.

Firstly, banks and technology teams need to bedrock ‘reliability-as-a-yardstick’ in their partnerships; across vendors, across geographies, across platforms.

Secondly, reliability is built over time by adopting a divergent approach.

The traditional ‘hire-and-instruct-engineers-on-a-project’ mode does not produce optimum results, because harnessing advanced technologies necessitates an experimental mindset as opposed to the erstwhile engineering approach.

Finally, reliability comes at a cost.

Experimenting with production in real time comes at sizable expense. One, the cost of errors can be high (especially when teams start implementing a project only to realise the underlying thinking is incorrect and it has to begin anew). Two, the multifarious skill base that runs these R&D experiments is rare to come by (it needs to be cherry-picked and teethed on multiple R&D propositions). Both require investment.

Unsurprisingly then, barring a few major banks, in-house ‘sandbox environments’ have largely been the domain of ivy-league academia and an elitist start-up/incubator ecosystem – neither option conducive to supporting projects of large scale or variable scope.

Banks today primarily focus on banking processes and not on creating horizontal pieces of technology. Rather than hiring technologists, they are beginning to partner with technology companies that bring the wherewithal to deliver on their high-value R&D outcomes.

Successful innovation labs (data or digital) walk a tightrope between the real-life opportunities of today and the possibilities of tomorrow; between applied technologies and blue-sky thinking.

The operative words here are “balance” and “focus”.

Guided by a set of concrete business benefits, banks seek innovation labs that, given a specific problem statement, pare it down to related experiments. Thereafter, the labs set up, run, and package proofs of concept (PoCs). Then, in quick turnaround, they come back with appropriate choices of solution architecture and recommendations for underlying technologies that either address the specific business challenge or seize the marketplace’s quantum opportunities.

This is the ‘playpen mindset’: flexible in approach but committed to the outcome; balanced and focussed.

This was originally published on the Express Computer website and is being reproduced here.


Beyond Chatbots: Conversational AI in Banking

In 2017, American banks collected $34.3 billion in overdraft fees (link). The average overdraft fee in the United States is $30. This means the average American ended up paying roughly $125 in overdraft fees – that is, $34.3 billion spread across the adult population works out to about four $30 fees per person. In a world where the share of digital payments is growing rapidly, a digital assistant (or ABA – Automated Banking Assistant) can help avoid such situations.

How Do ABAs Work?

AI Benefits the Business:

For decades, banks have used AI to automate their credit decisioning processes. From simple rules-based systems, they have now evolved considerably. Products like Mindbox® have been used by mortgage servicers to predict questions that might be asked based on a customer’s past behaviour, recent transactions, and their loan disposition.

By 2022, about 90% of all client banking interactions will be handled by Automated Banking Assistants (ABAs), saving $8 billion annually (source). In addition to the cost savings, the improved turnaround time will let the ABA cross-sell other bank products, actively expanding the bank’s business.

AI Benefits Consumers:

In developing countries, customers do not have the pervasive problem of overdraft fees. However, they can sometimes be careless with spending. Many engage in grey spending – paying for Netflix but never watching it, or getting a 12-month gym membership but dropping out. Third-party apps like ‘AskTrim’ allow customers not only to list out their spending but also to negotiate costs on internet/phone bills. Consumers can also use the app to make timely payments, avoid fees, and gain insight into their own spending.

Banks were not far behind in introducing proprietary Automated Banking Assistants (ABAs), their reasoning being manifold. Good in-house chatbots would reduce reliance on third-party apps, be safer and quicker, and give the bank more control over its interactions with customers. In addition, switching customers to automated advisors helps banks lower customer service costs.

Recent Developments:

Bank of America introduced an Automated Banking Assistant, ‘Erica’. It reached 1 million users in 2 months; Facebook took 10 months. ‘Erica’ supports advanced functionality like money transfers, account lockdowns, and detailed transaction summaries.

When it comes to financial products, banks cannot afford to get recommendations wrong. Banks cannot make incorrect recommendations and subsequently reject customers for specific products. Likewise, banks cannot make approvals that are out of line with the bank’s risk appetite. The former results in poor customer experience, and the latter could directly impact the bank’s reputation. Hence, it is important that banks focus on making relevant offers to customers.

Banks often use the concept of ‘Next Best Offer’ or ‘Next Logical Product’ to serve customers with products complementary to those they already hold. Global banks have used event-prediction techniques like Markov chains to estimate how likely customers are to adopt a new product (or even close an existing relationship); a toy sketch follows the questions below. This would answer questions like:

“What is the probability that a customer will add a home loan to their product portfolio if they already hold 2 FDs and 3 credit cards?”

“What are the chances that a credit card will be cancelled by a customer, now that they have a personal loan at a lower rate than what they were revolving at?”
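
As a toy illustration of the Markov-chain idea, the sketch below defines product-holding states and a transition matrix, then computes where a customer is likely to be after a few quarters. The states and probabilities are invented; a real model would estimate them from historical product-holding sequences.

```python
# Toy Markov-chain view of product adoption. States and transition
# probabilities are invented for illustration only.
import numpy as np

states = ["deposits_only", "deposits+cards", "deposits+cards+loan", "attrited"]

# transition[i][j] = P(customer in state i is in state j next quarter)
transition = np.array([
    [0.80, 0.15, 0.01, 0.04],
    [0.05, 0.75, 0.15, 0.05],
    [0.02, 0.08, 0.87, 0.03],
    [0.00, 0.00, 0.00, 1.00],  # attrition treated as absorbing
])

def state_distribution(state: str, quarters: int = 1) -> dict:
    """Probability of landing in each state after `quarters` steps."""
    start = np.eye(len(states))[states.index(state)]
    dist = start @ np.linalg.matrix_power(transition, quarters)
    return dict(zip(states, dist.round(3)))

# e.g., how likely is a deposits+cards customer to hold a loan within a year?
print(state_distribution("deposits+cards", quarters=4))
```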

Banks in developing economies have also started using such advanced analytics techniques. Let us discuss what we expect to see in the coming decade.

Improved Performance & Improved Adoption Cycle – A Virtuous Circle

Recommendation systems are reactive and need a fair amount of historical data. In the next decade, an improvement loop will emerge – better services will attract more usage, which will yield better data to improve the services further. It is a classic case of more people using an improving product, which drives even more innovation.

As innovation drives all forms of ABAs, we expect to see them in various regional languages – this will promote parity in banking.

Automated Portfolio Management

We expect to see many other sources of data incorporated into the ABA system – like stock ticker data and detailed views of ETFs. This information parity, coupled with increased trust in automated systems, will allow the average customer to be better invested in market instruments. We will no longer depend on fund managers and brokers.

Real-time decisioning

Today’s automation revolves around process optimization, not decisioning. When we ask for an APR reduction or for an overdraft charge to be reversed – any time a money-related decision is to be made – we always encounter a human who decides. They might consider the longevity of our banking relationship, the number of favours we have asked of the bank in the past, the amount of money in our account, and a variety of other factors – but why can’t an ABA decide in the future?
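
Conceptually, such a decision can be expressed as a simple score. Below is a deliberately simple sketch of how an ABA could weigh such a request in real time; every factor, weight, and threshold is invented for illustration, and a production system would learn them from data and operate within limits set by risk and compliance.

```python
# Hypothetical ABA fee-reversal decision. All weights and thresholds are
# invented for illustration; they are not any bank's actual policy.
from dataclasses import dataclass

@dataclass
class CustomerContext:
    tenure_years: float       # longevity of the relationship
    avg_balance: float        # average account balance
    reversals_last_year: int  # favours already granted
    fee_amount: float         # the charge in dispute

def approve_reversal(c: CustomerContext) -> bool:
    score = 0.0
    score += min(c.tenure_years, 10) * 2     # loyalty, capped at 10 years
    score += min(c.avg_balance / 1_000, 25)  # balance contribution, capped
    score -= c.reversals_last_year * 8       # repeated requests cost points
    score -= c.fee_amount / 10               # larger fees need more goodwill
    return score >= 10                       # arbitrary approval threshold

request = CustomerContext(
    tenure_years=6, avg_balance=12_000, reversals_last_year=1, fee_amount=30
)
print(approve_reversal(request))  # True: 12 + 12 - 8 - 3 = 13 >= 10
```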

The 2020s will be a revolutionary era for banking. The need for automation and convenience is obvious. Many technologies already used in the tech industry will translate seamlessly into the banking sector – for now, we are only waiting on compliance. The next decade will become a time for us to bank and invest with more inclusivity, transparency, and confidence. And of course, we hope never to talk to our bank tellers and branch managers again.

This was originally published on the Finextra website and is being reproduced here.
https://www.finextra.com/blogposting/18829/beyond-chatbots–conversational-ai-in-banking


Enabling the Indian Economy to recover faster from the COVID-19 impact

The case for creating banking data architecture that allows seamless information exchange by re-imagining existing data assets.

Along with the corporate and household sectors facing lockdowns, the banking sector is witnessing humongous losses – especially in the financial markets. Decreased productivity, income slowdowns, reduced investor sentiment, supply chain disruptions, and manufacturing hindrances are unprecedented strains with significant economic repercussions this year – and they may spill over into the next.

Governments globally, along with their central banks and supervisory institutions, are rolling out diverse combative measures – be they stimulus funds, liquidity injections, targeted loans to severely affected industry sectors, policy rate cuts, or repayment moratoriums for borrowers of all commercial banks, including housing finance companies and microfinance institutions.

Much of the globe’s economic rebuilding success – on the premise that the virus will plateau in the near future – will hinge on how swiftly governments’ and regulators’ policy responses are converted into financial transactions. The speed of money will determine the speed of recovery.

The need of the hour is minds that look at the situation calmly and clearly – minds that ask precise questions and then seek out pragmatic answers, especially by leveraging existing data ecosystems and assets to forge relatively inexpensive, ready-to-deploy solutions without compromising data security standards.

There are quite a few challenge areas that governments and banks can rapidly de-risk and de-clog. But the most important one, which will determine success, is the ability to seamlessly onboard new customers and exchange information – digital approvals, payments, and settlements.

Apart from payment processing, the current work-from-home scenario is adversely impacting multiple banking functions that are not yet contactless and are non-payment related – like customer onboarding, and the underwriting and approval of mortgages, loans, lines of credit, credit cards, etc.

In these COVID-19 times that mandate specific containment strategies, banks are forced to work with skeletal staffing, and their associates cannot have physical contact with customers. This fact, combined with strict adherence to regulatory procedures, means banks have to think of migrating and operationalising their customer onboarding processes onto a digital-only infrastructure.

Let’s play out this scenario further for customer verification processes. Presently, for higher-value loans, applicants need to physically present themselves at a bank’s premises; for issuing a credit card, a bank’s representative meets applicants to collect multiple verification proofs – residence, income, and other KYC information.

Extending our case in point, why do we not think of piggybacking this transaction dynamic onto our existing data assets – that is, institutionalizing a complete e-onboarding process by creating ‘KYC sharing platforms’? What this essentially implies is that, rather than going back to customers for laborious verification exercises, the proposed data architecture would re-utilize already verified and enriched data assets – Aadhaar, PAN, Voter ID, ration card, driving license, passport, KYC data held by the telecom authority, GST company information, etc. – to generate socially distant but well-connected and secure transactions. Advanced data practices can ensure this is achieved without compromising the accuracy or security of the socially connected process.
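
A conceptual sketch of that fan-out is below. No such unified registry API exists today, so every client here is a hypothetical stub standing in for a consent-based, audited integration with the respective data source.

```python
# Conceptual 'KYC sharing platform' fan-out. Each registry client is a
# hypothetical stub; real integrations would require consent tokens,
# regulatory approval, and full audit trails.
from dataclasses import dataclass

@dataclass
class KycResult:
    source: str
    verified: bool

def check_registry(source: str, customer_ref: str) -> KycResult:
    # Stub: a real implementation would call the registry with the
    # customer's consent token and return its verification status.
    return KycResult(source=source, verified=True)

def e_onboard(customer_ref: str, required_matches: int = 2) -> bool:
    """Approve digital onboarding if enough independent sources concur."""
    sources = ["aadhaar", "pan", "telecom_kyc", "gst_registry"]
    results = [check_registry(s, customer_ref) for s in sources]
    return sum(r.verified for r in results) >= required_matches

print(e_onboard("customer-123"))  # no fresh paperwork from the customer
```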

Along with pooling customer identification data (from the sources mentioned), the architecture will need newly created information workflows. These workflows will vitally integrate small businesses that may not have bank accounts or even be registered as business entities. In fact, without this last-mile connectivity, governmental support – soft loans, benefits, and other relief measures – would not reach the intended recipients.

The clear idea here is to architect a data solution for seamless information exchange that accelerates the delivery of financial services to both the banked and non-banked populations. By integrating data assets and pooling them in a way that does not compromise privacy and security, banking can become truly contactless, ensuring government schemes and benefits reach the needy faster.

And all this without reinventing the wheel – by leveraging the significant investments already made in the GST infrastructure, the UIDAI architecture, and the existing seamless payment provisioning processes.

Understandably, the proposed concept will raise regulatory and jurisdictional questions that we must diligently work through; but given the unprecedented nature of the global challenge knocking on our doors, it is time for private-public partners to unite across the social-economic-political spectrum to find pragmatic solutions that revitalize the economy – one definite step at a time.

In summary, digital-only information flows and workflows are a prerequisite for benefits to flow into deserving wallets and ensure faster economic recovery.

This was originally published on the ET CIO website and is being reproduced here.
