The Computer Weekly Developer Network gets high-brow on low-code and no-code (LC/NC) technologies in an analysis series designed to uncover some of the nuances and particularities of this approach to software application development.
Looking at the core mechanics of the applications, suites, platforms and services in this space, we seek to understand not just how apps are being built this way, but also… what shape, form, function and status these apps exist as… and what the implications are for enterprise software built this way, once it exists in live production environments.
Sandal writes here on the issues involved in applying low-code no-code principles to enterprise data integration in financial services, as follows…
For anyone working with data in a financial services environment, ease of use, access to information and the ability to analyse it are key. That is the main reason why Excel and visualisation tools such as Tableau and Qlik became so popular. It also explains, in part, why the industry is increasingly embracing ‘low code, no code’ principles that promise business users [varying degrees of] self-service in configuring applications or creating new ones altogether – with very short turnaround times to boot.
As we know, low-code [and to a greater extent] no-code is an approach that drives business user enablement across industries by making it easier for business users with little technical knowledge to design, build and launch applications quickly – all without opening a development environment to write scripts or Java or Python code. Today, even ‘no code’ platforms typically have high levels of information entitlements, compliance and security built in.
However, low-code no-code can also be seen as a new label for ideas that have been around for decades: fourth-generation languages (4GL) similarly hid code generation behind graphical configuration.
What we are now seeing is the widespread application of low-code no-code principles, helping to change the way data management and analytics processes are conducted across financial services firms – in other words, the configuration of data flows across different applications to support and set up business processes end to end.
Wider business process configuration
Business intelligence tools that allow users to visualise datasets and create dashboards and alerts are key to the analytics processes within financial services firms today. They have been heavily used by data analysts and quants but historically, they have tended to be confined to end points of data flows and data processes where data analysis and visualisation took place.
One limiting factor used to be the diversity in naming conventions, terminology and data models between different business processes and departments.
Good data governance and data management practices help address that, and the use of low-code no-code principles is now spreading from the end of data management processes into the middle of them, covering the configuration of data acquisition, quality management and distribution processes themselves. Moreover, rather than being the exclusive domain of analysts, this work can now be done by any business user with a broad understanding of data management. Data collection configuration, data quality checks and data cleansing, once delivered through scripts and specialist toolsets, can now be performed with mouse clicks and simple ‘drag and drop’ functions.
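To make the contrast concrete, here is a minimal sketch of the kind of script-based cleansing step that drag-and-drop configuration now replaces. The field names and rules are hypothetical illustrations, not taken from any particular platform:

```python
def clean_record(record):
    """Apply simple quality checks and cleansing to one price record.

    Returns (cleaned_record, issues), where issues lists any rule failures.
    """
    issues = []
    cleaned = dict(record)

    # Standardise terminology: normalise currency codes to upper case.
    if isinstance(cleaned.get("currency"), str):
        cleaned["currency"] = cleaned["currency"].strip().upper()

    # Completeness check: required fields must be present and non-empty.
    for field in ("isin", "price", "currency"):
        if not cleaned.get(field):
            issues.append(f"missing required field: {field}")

    # Range check: prices must be positive numbers.
    price = cleaned.get("price")
    if price is not None and (not isinstance(price, (int, float)) or price <= 0):
        issues.append("price out of range")

    return cleaned, issues


cleaned, issues = clean_record(
    {"isin": "US0378331005", "price": 189.5, "currency": " usd "}
)
```

In a low-code tool, each of these checks would typically be a configurable rule selected from a palette rather than a hand-written function.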
Use of coding is typically restricted to areas where developers push the boundaries of business logic and code new, proprietary models or rules – everything else becomes a matter of configuration. In other words, we are today seeing the march of self-service approaches to data analytics, powered by low-code no-code principles, moving from the endpoint of data management, where data visualisation and business intelligence have historically been carried out, right into the middle of the data management process.
Differentiation via information provisioning
Of course, delivering on these low-code no-code principles is not sufficient in itself. Proper governance around configurations, granular permissioning and controlled deployment is a necessary precondition to create the optimal blend of rigour and flexibility.
Enterprise data management solutions that acquire data sets, integrate them, standardise terminology, check quality and then supply business applications with the information they need often fall short when it comes to quickly onboarding new data sets, changing data mastering processes or connecting new downstream users and applications.
Applying the same low-code no-code principles to the onboarding of new data sets, the configuration of data cleansing workflows and validation rules, and the shaping of data sets so they can be easily picked up by any business application downstream can radically cut the length of change cycles. A precondition is a rigorous data management application beneath: one versatile enough to accommodate a wide range of formats, business rules and delivery methods, yet rigorous enough to track entitlements and permissions in a granular way and to keep tabs on the lineage of information, so that every data point can be traced if needed.
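One way to picture configuration-driven onboarding is that validation rules become data rather than code, and every accepted record is tagged with its source so lineage can be traced later. The sketch below illustrates this under assumed, hypothetical field names and rule definitions:

```python
# Validation rules expressed as configuration: a mapping from field name
# to named checks. A low-code platform would let users assemble this
# mapping through a UI instead of editing source code.
VALIDATION_RULES = {
    "price": [("positive", lambda v: isinstance(v, (int, float)) and v > 0)],
    "currency": [("iso_code", lambda v: isinstance(v, str) and len(v) == 3)],
}


def onboard(records, source):
    """Validate records against the configured rules, recording lineage.

    Returns (accepted, rejected, lineage): rejected pairs each record with
    the names of the rules it failed; lineage records where each accepted
    row came from, so data points can be traced back to their source.
    """
    accepted, rejected, lineage = [], [], []
    for i, rec in enumerate(records):
        failures = [
            name
            for field, rules in VALIDATION_RULES.items()
            for name, check in rules
            if not check(rec.get(field))
        ]
        if failures:
            rejected.append((rec, failures))
        else:
            accepted.append(rec)
            lineage.append({"source": source, "row": i, "fields": sorted(rec)})
    return accepted, rejected, lineage
```

Onboarding a new data set then means adding entries to the rule configuration, not writing a new pipeline – which is the change-cycle saving the article describes.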
This in itself drives business agility, speeds up decision making and enables the rapid development of applications for operational efficiency and sustainability. After all, the speed at which financial services firms onboard and digest new information and incorporate it into their decision making is a major if not the main differentiator.
Looking ahead, we are on the cusp of a new age in financial data management. Today, technology, process design and data onboarding capabilities are joining forces to bring analytics and data together and low-code no-code is helping to ensure that every business user is optimally provisioned with the information they need.
The result for financial institutions is a new world of opportunity where they optimise data ROI, speed up decision making and drive user enablement.