Data Management Self-Service Is Key for Data Engineers, and Their Business
In a post-COVID-19 world, remote access has rapidly emerged as the new normal for every organization. The shift to a remote-first world was already well underway: in 2019, 54% of U.S. workers reported working remotely at least once per month. But there’s no question the pandemic has accelerated inevitable change.
According to Gartner, 74% of CFOs expect to permanently transition many employees to remote work. In spite of all the disruption, these changes are also opening up some new opportunities to help employees be more engaged and satisfied—and get more done while they’re at it. Another survey by Kickstand Communications found that 85% of employees enjoy working from home, while 27% say they’re more productive.
In the healthcare arena, providers are responding to the COVID challenge by making healthcare services more accessible to more people through self-service. For example, Babylon Health has launched a new automated service that gives people self-service mobile access to real-time diagnostics, consultation, and symptom logging, all powered by artificial intelligence (AI). Healthcare providers and office staff save time while staying safe by interacting with patients remotely.
Self-Service is a Smarter Data Strategy
Self-service is gaining momentum across the enterprise as people increasingly expect an effortless experience at scale. We’re already seeing it in service desk use cases that let business users trigger automated workflows like password resets, ordering new laptops, and other tasks. Like many tech professionals, data engineers increasingly work remotely. Equipping them with a self-service approach to working with data could bring these same benefits to the data world, helping them accelerate digital transformation and get solutions to market faster.
All too often, today’s data engineers spend much of their time going back and forth between infrastructure teams and business stakeholders to get access to data. There’s no reason why they shouldn’t be able to take advantage of more self-service approaches to working with data. The key is getting the right data to the right people, so they can ask and answer their own questions.
Data is the product, and owning it end to end is the best way to serve it. But to do that, you need the right tooling for engineers and their business partners to support a self-service system, so they can work with data from anywhere. You also need to improve accessibility by standardizing on a common baseline like SQL, which lowers the barrier to developer productivity. Establishing a common way to talk about data also makes it easier for business partners to collaborate on your data initiatives and provide context and guidance.
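To make the idea of SQL as a shared baseline concrete, here is a minimal, hypothetical sketch using Python's built-in sqlite3 module. The table name, columns, and data are all illustrative and not from any real platform; the point is only that a business partner who knows basic SQL can answer their own question against a published "data product" without waiting on an engineer.

```python
import sqlite3

# Hypothetical "data product" exposed through plain SQL.
# Table and column names here are illustrative, not from any real system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 50.0)],
)

# A business partner who knows basic SQL can self-serve this question
# without a ticket to the data team:
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 170.0), ('south', 80.0)]
```

In practice the query would run against a governed platform rather than an in-memory database, but the interface the stakeholder learns is the same: plain SQL.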
A data mesh architecture is a natural fit for empowering data engineers who work remotely. It’s all about accessibility and personalization, and it lets you move beyond centralized, monolithic data warehouses and data lakes to deliver access to data at scale. Instead of struggling with the limitations of a data lake, you can use a distributed data architecture owned by cross-functional teams. Although a data mesh is very much an ecosystem, it provides centralized governance and is standardized for interoperability.
Providing data engineers at dispersed locations with self-service tools for data discovery, analytics, and data cataloging lets them build out data meshes and creates a more collaborative approach that helps organizations share data insights faster.
Increasingly, the role of today’s data engineers is less about manipulating data and more about providing tooling that their business stakeholders can use to work with data on their own. Ultimately, self-service principles can enable more stakeholders across an organization to work directly with data, and free up data engineers to focus on the tools and initiatives that accelerate transformation and drive business growth.
For example, a hedge fund manager analyzing futures contracts for apples could gain hands-on access to regional weather data, historical harvest figures, and other key datasets. Instead of chasing down a data analyst and explaining her requirements, she could directly access all the information she needs to conduct an analysis and produce a report for investors. A data mesh is a powerful way to drive better outcomes, such as accelerating delivery of time-sensitive financial services, by giving the right people access to the right data regardless of where they are working.
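A rough sketch of what that analyst's self-service query might look like, again using Python's sqlite3 as a stand-in for a real data platform. Every table, region, and figure below is invented for illustration; the point is that she joins the weather and harvest data products herself.

```python
import sqlite3

# All data below is made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE weather (region TEXT, year INTEGER, rainfall_mm REAL);
    CREATE TABLE harvest (region TEXT, year INTEGER, tons REAL);
    INSERT INTO weather VALUES ('washington', 2019, 900.0),
                               ('washington', 2020, 650.0);
    INSERT INTO harvest VALUES ('washington', 2019, 170.0),
                               ('washington', 2020, 140.0);
""")

# The analyst asks her own question: how did rainfall track harvest size?
rows = conn.execute("""
    SELECT w.year, w.rainfall_mm, h.tons
    FROM weather w
    JOIN harvest h ON w.region = h.region AND w.year = h.year
    ORDER BY w.year
""").fetchall()
print(rows)  # [(2019, 900.0, 170.0), (2020, 650.0, 140.0)]
```

No ticket, no hand-off: the join that would otherwise go through an analyst becomes a query she can run and refine herself.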
Balancing Data Access with Compliance and Governance
Self-service certainly frees data-driven organizations from old constraints, but it does not mean data access is a free-for-all. In some situations, such as systems involving business intelligence (BI), self-service tools can even introduce new risks. Although you always want to remove bottlenecks to data, you still need to retain full control over compliance and governance.
Self-service data discovery tools that let users find datasets, query them, and analyze the results help make data use more open and ethical: data is no longer locked away with a centralized team but shared throughout the organization in a data mesh environment. The data mesh architecture provides visibility and transparency into who is using data, and how. Self-service tools put the right technology within reach to help data engineers accelerate their work and drive business agility, with the visibility the organization needs to avoid compromising data ethics.
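One way to picture that visibility is a thin audit layer around query execution, so governance teams can see who is using which dataset and when. This is a hypothetical sketch, not any real platform's API; the wrapper function, user identifier, and log structure are all assumptions for illustration.

```python
import sqlite3
import datetime

# Illustrative audit trail: every query is recorded before it runs.
audit_log = []

def audited_query(conn, user, sql):
    """Hypothetical wrapper: record who ran which query, then execute it."""
    audit_log.append({
        "user": user,
        "sql": sql,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, segment TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'retail')")

rows = audited_query(conn, "analyst@example.com", "SELECT segment FROM customers")
print(rows)            # [('retail',)]
print(len(audit_log))  # 1
```

Real platforms handle this with access policies and query logs rather than an in-process list, but the principle is the same: self-service access and an auditable record of that access are not in conflict.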
There’s no question that an increasingly remote workforce poses real challenges for every organization. But when it comes to data engineers, you can still build the collaborative culture you need to drive data intensity. A self-service approach will help your data engineers work better together and engage more deeply in business imperatives around digital transformation and the customer experience. With the right strategy, you give your developers the boost they need to be more productive and to speed the time to market of data experiences and applications.
About the author: Andrew Stevenson is the chief technology officer and co-founder of Lenses.io. He leads the company’s world-class engineering team and technical strategy. Andrew started as a C++ developer before leading and architecting big data projects in the banking, retail, and energy sectors, including Clearstream, Eneco, Barclays, ING, and IMC. He is an experienced fast data solution architect and highly respected open source contributor with extensive data warehousing knowledge. His areas of expertise include DataOps, Apache Kafka, GitOps, Kubernetes, and the delivery of data-driven applications and big data stacks.