How Our Partners at AWS Keep Drug Development Data Safe in the Cloud

For lots of drug developers, the cloud still sounds like a scary place to be.

In my role as Chief Technology Officer at QbDVision, one of my most critical tasks is helping customers and prospects navigate their concerns about data security and integrity – and especially how vendors can protect both in a cloud-based environment like QbDVision.

Despite the many benefits of cloud infrastructure – from scalability, to processing agility, to reduced costs – it can be a tough sell in an industry where on-prem storage is as much of a default gold standard as paper-based processes. As I’ve heard all too many times, “We don’t want to be the next Merck.”

We address those very real concerns almost every day at QbDVision, often in tandem with our partners at Amazon Web Services (AWS) – the cloud infrastructure partner who powers our platform. Together we’ve talked through thousands of security and compliance questions with hundreds of drug developers. And together, we’ve helped many of them build the confidence they need to make the leap to the cloud. 

How? To answer that, I sat down with my colleague Reef D’Souza, Principal Solutions Architect at AWS, to talk about some of the questions he hears most – and how he helps drug developers understand the risks, mitigation strategies, and best practices they need to focus on to make a successful transition to the cloud.

So what are the top areas of concern in the drug development security landscape?

Reef: There are four big challenges that I hear about most.

1. Compliant data storage

From clinical trial data, to CMC data, to real-world evidence, drug developers have to retain an extraordinary amount of research, trial, and manufacturing information, and do so for prolonged periods of time. That data has to go somewhere. And for drug developers, that somewhere needs to comply with very stringent data integrity regulations.

2. Data silos

Lots of drug developers collect data in a kaleidoscope of different points and formats across the value chain. This leads to an incredibly disjointed flow of information, pinhole visibility into existing data resources, and countless missed opportunities to reduce time and resource waste. 

In an ideal state, easy access to historical data and visibility into enterprise-wide data streams would help scientists, engineers, and researchers avoid duplicate work and focus on new, additive workstreams. Instead, with data flowing in from so many different, disconnected sources – lab instrumentation, remote clinical trial participants, post-market surveillance, and more – drug developers really struggle to identify trends, opportunities, and risks, and also ensure compliance. 

3. Analyzing large datasets

Today’s drug developers have to absorb, synthesize, and analyze a truly staggering amount of data points. It’s in the billions already, and innovations like synthetic trial designs, commercial market monitoring, and post-market surveillance keep adding billions more every year – from a surging number of internal and external sources.

4. Experience in cloud computing 

Lack of internal expertise and experience in cloud computing is also a growing blocker. Drug developers need a combination of talent that’s uniquely difficult to find: both a deep understanding of the life sciences and skill in building and maintaining cloud environments.

In just the last year, we’ve all seen what’s possible with foundation models trained on internet-scale public datasets. Those high-profile breakthroughs have put life science businesses under a lot of conflicting pressure: on one hand, to secure and safeguard sensitive data resources in every way they can, and on the other, to make that data available and accessible in formats that enable and accelerate innovation.

At the same time, lots of drug developers are feeling torn between the risks these challenges create and the unprecedented potential these data resources represent.

The diversity of that data makes this balancing even more of a challenge. We’re talking about laboratory, clinical, manufacturing, customer, personal health data, and more. At a lot of organizations, it’s all they can do to keep up with the compliance requirements for all this information, to say nothing of innovating with it (especially when they’re still using labor-intensive, paper-based, point-in-time validation processes).

With that volume of data, and that amount of regulation to navigate, automation is really the only answer for organizations that want to efficiently maintain an optimal security and compliance posture. Many of them are already making that switch using data management tools like AWS Clean Rooms, Amazon DataZone, and AWS Lake Formation – all great ways to make data available with fine-grained access controls (AWS Identity and Access Management (IAM)) and best-in-class encryption (AWS Key Management Service (KMS)).
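To make the fine-grained access control point concrete, here's a minimal sketch (in Python, with hypothetical bucket and key names) of the kind of least-privilege IAM policy document that pairs scoped S3 read access with decrypt rights on a single KMS key:

```python
import json

# Hypothetical identifiers -- substitute your own resources.
BUCKET = "example-cmc-data-bucket"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

def build_scoped_read_policy(bucket: str, key_arn: str) -> dict:
    """Return an IAM policy document granting read access to one bucket
    prefix and decrypt access to one KMS key -- nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadProcessDataOnly",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/process-data/*",
            },
            {
                "Sid": "DecryptWithProjectKey",
                "Effect": "Allow",
                "Action": ["kms:Decrypt"],
                "Resource": key_arn,
            },
        ],
    }

policy = build_scoped_read_policy(BUCKET, KMS_KEY_ARN)
print(json.dumps(policy, indent=2))
```

The point of a policy shaped like this is that access to the data and access to the key that encrypts it can be granted independently and narrowly.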

We also offer services like AWS CloudTrail, Amazon GuardDuty, AWS Security Hub, Amazon Macie, and Amazon Security Lake, as well as access to a wide assortment of third-party offerings in the AWS Marketplace. They can all be used to provide security posture observability and auditability to ensure responsible data usage.
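To illustrate the auditability point: every API call lands in CloudTrail as a structured event, which downstream tooling can render into a human-readable audit trail. A simplified sketch, using a hypothetical, heavily trimmed event record (real records carry many more fields):

```python
from datetime import datetime, timezone

# Simplified, hypothetical CloudTrail event record.
event = {
    "eventTime": "2024-03-01T12:34:56Z",
    "eventName": "GetObject",
    "eventSource": "s3.amazonaws.com",
    "userIdentity": {"type": "IAMUser", "userName": "analyst-1"},
    "requestParameters": {"bucketName": "example-cmc-data-bucket"},
}

def audit_line(evt: dict) -> str:
    """Render one CloudTrail record as a single audit-trail line."""
    who = evt["userIdentity"].get("userName", "unknown")
    when = datetime.strptime(evt["eventTime"], "%Y-%m-%dT%H:%M:%SZ")
    when = when.replace(tzinfo=timezone.utc)
    return f"{when.isoformat()} {who} called {evt['eventName']} on {evt['eventSource']}"

print(audit_line(event))
```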

What do you say to smaller drug developers who ask if the cloud is a safe space for them to operate?

Yeah, at AWS, we know that companies can have very specific data challenges and requirements in the early stages of their business’ development. 

That’s why we offer a dedicated portal that helps support founders and growth-mode ventures with tools and solutions specifically tailored to fast-paced startup environments. We have programs like AWS Activate, which helps companies experiment with credits to offset their AWS bills, and the Startup Security Baseline, which provides startup-focused guidance on AWS security best practices.

Biotech companies looking to quickly deploy a secure foundation with AWS infrastructure can also utilize our Biotech Blueprint. It’s essentially a reference architecture for scientific applications, and is geared specifically to biotechnology companies that want to manage their software in the cloud.

This support scales seamlessly too. As organizations grow, we’re there to help them manage security standards and compliance certifications, providing them with the tools, services, and visibility they need to move fast while remaining secure and compliant. The security and availability of our infrastructure really is well aligned with life science organizations’ requirements for validated and controlled workloads.

On top of that, we also provide documentation tools, guidance, and compliance experts to help companies build applications that support their GxP and health data privacy compliance. By moving GxP-regulated workloads to AWS – and also utilizing services like AWS CloudTrail, Amazon EventBridge, and AWS Config – life sciences companies can achieve near real-time, continuous compliance through automation, simplify change management and tracking, and enhance data backup and recovery.
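A common continuous-compliance building block is an EventBridge rule that fires whenever an AWS Config rule flags a resource as non-compliant. The sketch below shows such an event pattern plus a toy matcher to illustrate how events are filtered (the pattern fields follow the documented Config-to-EventBridge event shape; the matcher itself is illustrative only):

```python
# EventBridge event pattern matching AWS Config compliance changes.
# Routing these to a notification or remediation target is a common
# continuous-compliance building block.
pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {"newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}},
}

def matches(event: dict, pattern: dict) -> bool:
    """Minimal, illustrative matcher for this pattern's fields."""
    for key in ("source", "detail-type"):
        if event.get(key) not in pattern[key]:
            return False
    ct = event.get("detail", {}).get("newEvaluationResult", {}).get("complianceType")
    return ct in pattern["detail"]["newEvaluationResult"]["complianceType"]

sample = {
    "source": "aws.config",
    "detail-type": "Config Rules Compliance Change",
    "detail": {"newEvaluationResult": {"complianceType": "NON_COMPLIANT"}},
}
print(matches(sample, pattern))  # True
```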

A lot of drug developers still see on-prem storage as the default gold standard for security. What are some advantages of the cloud that they should consider?

One really important thing to keep in mind is that security and compliance are a shared responsibility between the customer and a cloud infrastructure provider like AWS. Customers’ exact responsibilities can vary depending on which services they choose and use, how those services are integrated into their IT environment, and applicable laws and regulations. But ultimately, a joint model like ours offers end users a lot of flexibility and control. 

For one, this shared model can help relieve a lot of the customer’s operational burden. At AWS, we operate, manage, and control components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. The customer only manages and assumes responsibility for the guest operating system (including updates and security patches), other associated applications, and the configuration of the AWS-provided security group firewall. 
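For example, the customer's side of this shared model includes configuring the security group firewall. The sketch below (hypothetical CIDR range and port) shows the shape of an ingress rule along with a simple least-privilege guardrail check:

```python
# Shape of an ingress rule like the one passed to EC2's
# authorize_security_group_ingress API. Values are hypothetical.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [
        {"CidrIp": "10.0.0.0/16", "Description": "HTTPS from corporate VPN only"}
    ],
}

def is_least_privilege(rule: dict) -> bool:
    """Reject world-open rules -- a simple guardrail check."""
    return all(r["CidrIp"] != "0.0.0.0/0" for r in rule["IpRanges"])

print(is_least_privilege(ingress_rule))  # True
```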

When we provide a cloud environment for a drug developer, we also share management, operation, and verification of IT controls. For example, we’re responsible for securing the physical infrastructure where the environment is deployed, which may previously have been the customer’s responsibility. But that’s just one of many ways our cloud deployments can be customized. We work closely with every customer to create the distributed control environment that works best for them, and offer detailed control and compliance documentation to help them perform their required control evaluation and verification procedures. 

There are also a lot of benefits to partnering with Independent Software Vendors (ISVs) who have built their solutions on AWS. When drug developers onboard tools like QbDVision, they directly inherit all the policy, architectural, and operational best practices AWS has established to satisfy the requirements of our most security-sensitive customers. It’s an opportunity to follow paths that have been well defined by many other organizations and technical practitioners who have created successful AWS-based infrastructure and SaaS products.


A good example is the managed services we provide for access control, encryption, network firewalls, and logging, all of which are critical priorities for any vendor with drug development users. Having us manage these security-of-service factors enables SaaS vendors to keep their focus where it matters most to their customers: identifying and meeting new feature demands. With our managed security services, ISVs can often test, tune, and ship multiple major releases a year. That’s MUCH faster than upgrades to any on-prem software.

On top of that, we often find that many businesses using on-prem software are also running a variety of different versions of their software, hardware, and architecture – which seriously limits the utility of that system beyond each individual instance. With cloud deployments, ISVs can offer real-time, many-to-many collaboration features that simply aren’t possible with traditional license models.

Resources like our prescriptive guidance library, publicly available aws-samples repositories on GitHub, and AWS Service Catalog further enable ISVs to maximize the time they spend on value creation for their customers – instead of building from scratch. The AWS CDK framework also enables SaaS developers to define infrastructure as code (IaC) in a repeatable, collaborative way that reduces workload and enhances compliance. Once builders have adapted our IaC templates to their business’ specific constraints and parameters, those tools can be shared and leveraged across the organization to accelerate development and deployment of new features.
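To give a feel for what IaC looks like in practice: a CDK stack ultimately synthesizes a CloudFormation template. The minimal, illustrative sketch below builds such a template declaring a versioned, KMS-encrypted S3 bucket (resource names are hypothetical):

```python
import json

# Minimal CloudFormation-style template -- the kind of artifact a CDK
# stack synthesizes -- declaring a versioned, KMS-encrypted S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ProcessDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": {"Status": "Enabled"},
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                },
            },
        }
    },
}
print(json.dumps(template, indent=2))
```

Because the template is plain, versionable text, teams can review, reuse, and share it like any other code artifact – which is the compliance advantage of IaC.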

Lastly, cloud deployment also typically gives SaaS vendors visibility into their customers’ metadata configuration. That can create valuable opportunities for proactive validation and reporting on security and compliance posture, which can take a lot of burden off the end user.

Drug developers often ask us “Is data in AWS really safe?” What’s your response to that question?

When I need to build a drug developer’s confidence in AWS, there are a couple of places I typically start. 


One of our top priorities is ensuring that customers have full confidence in our security apparatus. We offer them a substantial amount of information about our IT control environment, including technical papers, reports, certifications, other third-party attestations, and more. This documentation helps customers understand the relevant controls we have in place, see how they’ve been verified, and how and where those controls will function in their extended IT environment.

That can be a big shift for some drug development customers. Traditionally, with on-prem systems, these businesses would have internal and/or external auditors conduct a process walk-through and evidence evaluation to validate the design and operational effectiveness of security controls. This type of direct observation and verification is often drug developers’ default.

With a cloud provider like AWS, these customers can simply request and evaluate a wide range of third-party attestations and certifications – all of which provide assurance that the design and effectiveness of our controls have been validated by a qualified, independent third party. We find these resources are an especially great way to build confidence and accelerate compliance reviews when customers are concerned that a vendor has “offloaded” security to AWS. We can easily show that even a shared control environment can function as a comprehensive, unified, and highly effective framework.

Some customers are also interested to learn about the risk and compliance program we’ve implemented throughout our organization. This program aims to manage risk in all phases of service design and deployment and continually improve and reassess the organization’s risk-related activities.

Technical details

To get customers comfortable with AWS infrastructure, we also like to highlight Amazon Elastic Compute Cloud (Amazon EC2): a web service that provides secure, resizable compute capacity in the cloud. With the AWS Nitro System as its underlying platform, EC2 is designed to make web-scale cloud computing easier for developers.

By design, Nitro has no operator access. There’s no way to log in to EC2 Nitro hosts, access the memory of EC2 instances, or touch any customer data stored on either locally encrypted instance storage or remote encrypted EBS volumes. If any AWS operator needs to do maintenance work on an EC2 server – even when they have the highest possible access privileges – they can only use a limited set of authenticated, authorized, logged, and audited administrative APIs. 

None of these APIs give operators the ability to access customer data on the EC2 server. And because these controls are built into the Nitro System itself, no AWS operator can bypass them.

They don’t have to take our word for it, either. These built-in security protocols have been thoroughly vetted by third parties.

We also get a lot of questions about creating validated environments in AWS. Is that possible?

When security stakeholders are still anchored in on-prem deployments, one thing they often don’t realize is that the basic principles governing on-prem infrastructure qualification still absolutely apply to virtualized cloud infrastructure. GxP cloud applications still need to be validated and their underlying infrastructure still needs to be qualified. We follow all the same current industry guidance to do so. 

Traditionally, though, it was regulated companies’ responsibility to qualify their infrastructure and validate their local apps. With virtualized deployments, the cloud provider shares that responsibility with the customer. The company still “owns” the overall security program, but the supplier is now responsible for qualifying the physical infrastructure, virtualization, and service layers, and fully manages the services they provide. 

Striking that balance of responsibilities is a process, though, and a big part of successfully deploying compliant cloud infrastructure. AWS has helped many customers along this journey, and there’s no compression algorithm for that experience. 

If any customers would like to know more about what that process looks like, I encourage them to read our whitepaper on the customer journey to GxP compliance.


Tell me about how AWS ensures data integrity. How do you help drug developers comply with regulations like 21 CFR Part 11?

We offer a couple of services that support that critical compliance checkpoint.


Amazon Relational Database Services (RDS)

Our global infrastructure is built around two key compartmentalization structures: 

  • AWS Regions contain multiple physically separated and isolated Availability Zones, which are connected by low-latency, high-throughput, highly redundant networks. 
  • Availability Zones are designed so customers can operate applications and databases that automatically fail over between zones without interruption. These compartments are much more highly available, fault tolerant, and scalable than traditional infrastructures built around one or more data centers.


During normal operations, Amazon RDS automatically creates and saves backups of your database (DB) instance by taking a storage volume snapshot. That backs up the entire instance, not just individual databases.

Amazon RDS also uses built-in replication functionality in MariaDB, MySQL, Oracle, and PostgreSQL DB engines to create a special type of DB instance called a “read replica,” which is based on a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica, which reduces the load on your source DB instance by routing read queries from your applications to the copy. 

For read-heavy database workloads, this technique dynamically scales your infrastructure’s processing power far beyond the capacity of any single DB instance. You can also promote a read replica to a standalone instance as a disaster recovery solution if the source DB instance fails.
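The routing idea behind read replicas can be sketched in a few lines (endpoints are hypothetical; a real application would do this inside its database driver or connection pool):

```python
import itertools

# Hypothetical endpoints: one source DB instance, two read replicas.
SOURCE = "prod-db.example.rds.amazonaws.com"
REPLICAS = [
    "prod-db-rr1.example.rds.amazonaws.com",
    "prod-db-rr2.example.rds.amazonaws.com",
]

_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(query: str) -> str:
    """Route SELECTs to a replica (round-robin); send writes to the source."""
    if query.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return SOURCE

print(endpoint_for("SELECT * FROM batches"))
print(endpoint_for("INSERT INTO batches VALUES (1)"))
```

Spreading reads across replicas like this is what lets a read-heavy workload scale beyond a single DB instance, while all writes still flow through the source.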


Amazon Simple Storage Services (S3)

S3 offers powerful versioning capabilities: customers can keep multiple versions of an object in one bucket, so an object can be easily restored if it’s accidentally deleted or overwritten.
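A sketch of what restoring from version history involves: given an object's version list (shaped loosely like a list_object_versions response, with illustrative data), pick the newest version that isn't a delete marker:

```python
# Illustrative version history for one object key, newest first.
versions = [
    {"VersionId": "v3", "IsDeleteMarker": True,  "LastModified": "2024-03-03"},
    {"VersionId": "v2", "IsDeleteMarker": False, "LastModified": "2024-03-02"},
    {"VersionId": "v1", "IsDeleteMarker": False, "LastModified": "2024-03-01"},
]

def latest_restorable(history: list) -> str:
    """Return the newest real version id, skipping delete markers."""
    live = [v for v in history if not v["IsDeleteMarker"]]
    live.sort(key=lambda v: v["LastModified"], reverse=True)
    if not live:
        raise LookupError("no restorable version")
    return live[0]["VersionId"]

print(latest_restorable(versions))  # v2
```

Here v3 is a delete marker (the "accidental delete"), so the object can be recovered by restoring v2.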

It’s also natively integrated with AWS Backup, a fully managed, policy-based service that can be used to centrally define backup policies to protect your data in Amazon S3. After you define your backup policies and assign Amazon S3 resources to them, AWS Backup automates the creation of Amazon S3 backups and securely stores the backups in an encrypted backup vault that you designate in your backup plan.

This service also has some really valuable replication tools that enable objects to be automatically, asynchronously copied across Amazon S3 buckets. Buckets that are configured for object replication can be owned by the same AWS account, or by different accounts. Users can then replicate objects to a single destination bucket or to multiple destination buckets. The destination buckets can be in different AWS Regions, or within the same Region as the source bucket.

Live replication with Cross-Region Replication (CRR) also makes it really easy to automatically copy new objects to the destination bucket as they are written to the source bucket.
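For reference, a replication setup is expressed as a ReplicationConfiguration document. The sketch below shows its general shape, with hypothetical ARNs, as it would be passed to S3's put_bucket_replication API:

```python
# General shape of an S3 ReplicationConfiguration for CRR.
# Role and bucket ARNs are hypothetical placeholders.
replication_config = {
    "Role": "arn:aws:iam::111122223333:role/example-replication-role",
    "Rules": [
        {
            "ID": "ReplicateAllNewObjects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate every new object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::example-dr-bucket-eu-west-1",
                "StorageClass": "STANDARD",
            },
        }
    ],
}

rule = replication_config["Rules"][0]
print(rule["Destination"]["Bucket"])
```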

One of the trickiest things about cybersecurity is that risks are constantly changing. How does AWS continually evolve its security posture?

Absolutely, there’s no one-and-done in cloud security. As more and more users embrace the cloud, one of our top priorities is helping them always stay one step ahead on security, identity, and compliance – whatever their specific requirements may be.

Our integrated risk and compliance program is a critical part of that effort. It’s an essential feature of how we manage risk in all phases of service design and deployment, and also continually reassess and refine our own risk-related activities. We also have a business risk management (BRM) program that partners with AWS business units to give our Board of Directors and senior leadership a holistic view of key risks across AWS.

At the same time, we also sponsor weekly, monthly, and quarterly meetings and reports to – among other things – ensure that new threats and challenges are clearly communicated across our risk management program. When high-priority risks emerge, they trigger an escalation process that elevates those risks to management level. Together, these processes help ensure that risk is consistently managed across our complex operations.

Security controls are an essential component of AWS risk management. We have a robust environment of standards, processes, and structures that serves as the foundation of our security requirements for AWS – and that also leverages aspects of Amazon’s overall control architecture.

Automating these controls is one important way we ensure they’re implemented properly: by reducing human intervention in certain recurring processes within the AWS control environment. Humans are notoriously bad at consistently executing repetitive tasks, so automating away that risk factor helps us implement our security strategy with minimal design deviations.

When a customer adopts an AWS service, we also run through multiple pre- and post-deployment steps that help us maximize information security. These activities are built into our services during the design and development process, and help us continually verify our services’ operational security once they’re in-use.


They include two pre-launch activities:

  • AWS Application Security risk management review to validate that security risks have been identified and mitigated
  • Architecture readiness review to help customers ensure alignment with compliance regimes

And two post-launch activities:

  • AWS Application Security ongoing review to help ensure service security posture is maintained
  • Ongoing vulnerability management scanning

AWS also participates in 50+ different audit programs, and regularly undergoes independent third-party attestation audits to verify that our control activities are operating correctly. Depending on the relevant region and industry, these audits may compare our risk management program with a variety of global and regional security frameworks. 

The results of these audits are documented by the assessing body and made available for all AWS customers through AWS Artifact: a free self-service portal for on-demand access to AWS compliance reports. 

Many customers are also interested to learn how our large public network footprint helps AWS threat intelligence deter malicious actors. You can read more about that here.

Say a drug developer is interested in a new AWS-based SaaS solution. What standards and compliance structures should they look for?

This is one of the great things about adopting any application built on AWS: the fact that it’s deployed on our cloud infrastructure means that the solution has been tested and certified to the highest possible standards:

  • To get on the AWS Partner Network (APN) and AWS Marketplace, every SaaS vendor must go through extensive security reviews. 
  • Partners who want a Competency designation must demonstrate technical proficiency and proven customer success in their solution areas. 


AWS Life Sciences Competency Partners must also demonstrate technical expertise and customer success specifically in Life Science solutions. They need to have shown that they can securely store, process, transmit, and analyze clinical information. We don’t allow solution builders to generalize experience from other areas.

Generally speaking, what are some of the qualities that signal a strong, secure cloud-based SaaS solution?

Any time I speak with a customer who’s evaluating a cloud-based solution, I recommend they look for four things:

  • Attestation of security processes and controls via their SOC 2 reports
  • Specific accreditations required by their industry (HIPAA, PCI, etc.)
  • A builder culture where everyone takes ownership of security and builds security into all their technology
  • A partnership with AWS – one that shows they’re using purpose-built services to ensure robust security, end-to-end compliance, and holistic resilience in both the development and deployment of their solution

Sum it up for me: Why should drug developers be comfortable putting their data in the cloud?

I firmly believe that earning customers’ confidence should be the foundation of any business like AWS.

As a cloud supplier, we know you trust us to protect your most critical and sensitive assets: your data. We earn that trust by working closely with you to understand your data protection needs, and by offering the most comprehensive set of services, tooling, and expertise to help you protect your information resources.

If you use AWS as your supplier, we enable you to control your data in multiple ways:

  • AWS services and tools that determine where your data is stored, how it is secured, and who has access to it 
  • Services like AWS Identity and Access Management (IAM) that allow you to securely manage access to AWS services and resources
  • Tools like AWS CloudTrail and Amazon Macie that enable detection and auditing of sensitive data and actions taken on that data 
  • Services like AWS CloudHSM and AWS Key Management Service (KMS) that allow you to securely generate and manage encryption keys
  • AWS Control Tower for governance and controls for data residency


Drug developers can also be confident knowing that we live and breathe the same regulatory and compliance requirements that they do. We provide the tools, guidance, and expertise they need to manage regulatory, HITRUST, and GDPR compliance – all of which are built into our life sciences value chain. On top of over 130 HIPAA-eligible services, we hold certifications for compliance with ISO/IEC standards and offer over 500 features and services focused on security and compliance.

Those are just a few of the host of reasons why AWS is a preferred life sciences cloud provider at nine out of the top ten global pharma companies. For almost a decade now, industry leaders like Moderna, Novartis, AstraZeneca, Pfizer, and Eli Lilly have relied on us to enhance data liquidity, optimize for operational excellence, and personalize customer engagement. 

Every year, more and more businesses like these are choosing AWS for our unmatched breadth and depth of services; the maturity, security and reliability of our cloud platform; and our deep understanding of life sciences industry regulations and compliance.


Want to take a closer look at how we handle cloud security at QbDVision?

We’d love to give you a detailed look at all the ways we protect our customers’ data. Head over to our trust page or reach out to learn more from our cybersecurity experts.

Robert Dimitri, M.S., M.B.A.

Director Digital Quality Systems, Thermo Fisher Scientific

Robert Dimitri is a Director of Digital Quality Systems in Thermofisher’s Pharma Services Group. Previously he was a Digital Transformation and Innovation Lead in Takeda’s Business Excellence for the Biologics Operating Unit while leading Digital and Data Sciences groups in Manufacturing Sciences at Takeda’s Massachusetts Biologics Site.

Devendra Deshmukh

Global Head, Digital Science Business Operations, Thermo Fisher Scientific

Devendra Deshmukh currently leads Global Business Operations for Digital Science Solutions at Thermo Fisher Scientific. In this role he oversees operations broadly for the business across its product portfolio and leads the global professional services, technical support, and product education teams.

Grant Henderson

Sr. Dir. Manufacturing Science and Technology, VernalBio

Grant Henderson is the Senior Director of Manufacturing Science and Technology at Vernal Biosciences. He has years of expertise in pharmaceutical manufacturing process development/characterization, advanced design of experiments, and principles of operational excellence.

Ryan Nielsen

Life Sciences Global Sales Director, Rockwell Automation

Ryan Nielsen is the Life Sciences Global Sales Director at Rockwell Automation. He has over 17 years of industry experience and a passion for collaboration in solving complex problems and adding value to the life sciences space.

Shameek Ray

Head of Quality Manufacturing Informatics, Zifo

Shameek Ray is the Head of Quality Manufacturing Informatics and Zifo and has extensive experience in implementing laboratory informatics and automation for life sciences, forensics, consumer goods, chemicals, food and beverage, and crop science industries. With his background in services, consulting, and product management, he has helped numerous labs embark on their digital transformation journey.

Max Peterson​

Lab Data Automation Practice Manager, Zifo

Max Petersen is the Lab Data Automation Practice Manager at Zifo responsible for developing strategy for their Lab Data Automation Solution (LDAS) offerings. He has over 20 years of experience in informatics and simulation technologies in life sciences, chemicals, and materials applications.

Michael Stapleton

Life Sciences Luminary and Influencer

Michael Stapleton is a life sciences leader with success spanning leadership roles in software, consumables, instruments, services, consulting, and pharmaceuticals. He is a constant innovator, optimist, influencer, and digital thought leader identifying the next strategic challenge in life sciences, executing and operationalizing on high impact strategic plans to drive growth.

Matthew Schulze

Head of Digital Pioneering Medicines & Regulatory Systems, Flagship Pioneering

Matt Schulze is currently leading Digital for Pioneering Medicines which is focused on conceiving and developing a unique portfolio of life-changing treatments for patients by leveraging the innovative scientific platforms and technologies within the ecosystem of Flagship Pioneering companies.

Daniel R. Matlis

Founder and President, Axendia

Daniel R. Matlis is the Founder and President of Axendia, an analyst firm providing trusted advice to life science executives on business, technology, and regulatory issues. He has three decades of industry experience spanning all life science and is an active contributor to FDA’s Case for Quality Initiative. Dan is also a member of the FDA’s advisory council on modeling, simulation, and in-silico clinical trials and co-chaired the Product Quality Outcomes Analytics initiative with agency officials.

Kir Henrici

CEO, The Henrici Group

Kir is a life science consultant working domestically and internationally for over 12 years in support of quality and compliance for pharma and biotech. Her deep belief in adopting digital technology and data analytics as the foundation for business excellence and life science innovation has made her a key member of PDA and ISPE – she currently serves on the PDA Regulatory Affairs/Quality Advisory Board

Oliver Hesse

VP & Head of Biotech Data Science & Digitalization, Bayer Pharmaceuticals

Oliver Hesse is the VP & Head of Biotech Data Science & Digitalization for Bayer, based in Berkeley, California. He holds a degree in Biotechnology from TU Berlin and started his career in a biotech start-up in Germany before joining Bayer in 2008 to work on automation, digitalization, and the application of data science in the biopharmaceutical industry.

John Maguire

Director of Manufacturing Sciences, Sanofi mRNA Center of Excellence

With over 18 years of process engineering experience, John is an expert in the application of process engineering and operational technology in support of the production of life science therapeutics. His work includes plant capability analysis, functional specification development, and the start-up of drug substance manufacturing facilities in Ireland and the United States.

Chris Kopinski

Business Development Executive, Life Sciences and Healthcare at AWS

As a Business Development Executive at Amazon Web Services, Chris leads teams focused on tackling customer problems through digital transformation. His experience includes leading business process intelligence and data science programs within global technology organizations and improving outcomes through data-driven development practices.

Tim Adkins

Digital Life Science Operations, ZÆTHER

Tim Adkins is a Director of Digital Life Science Operations at ZÆTHER, serving the life science industry by helping companies reach their desired business outcomes through digital IT/OT solutions. He has 30 years of industry experience as an IT/OT leader in global operational improvement and support, manufacturing system design, and implementation programs.

Blake Hotz

Manufacturing Sciences Data Manager, Sanofi

At Sanofi’s mRNA Center of Excellence, Blake Hotz focuses on developing data ingestion and cleaning workflows using digital tools. He has over 5 years of experience in biotech and holds degrees in Chemical Engineering (B.S.) and Biomedical Engineering (M.S.) from Tufts University.

Anthony DeBiase

Offering Manager, Rockwell Automation

Anthony has over 14 years of experience in the life science industry, focusing on process development, operational technology (OT) implementation, technology transfer, CMC, and cGMP manufacturing in biologics, cell therapies, and regenerative medicine.

Andy Zheng

Data Solution Architect, ZÆTHER

Andy Zheng is a Data Solution Architect at ZÆTHER who strives to grow and develop cutting-edge solutions in industrial automation and life science. His years of experience in software automation have focused on bringing customers innovative solutions that improve process efficiency.

Sue Plant

Phorum Director, Regulatory CMC, Biophorum

Sue Plant is the Phorum Director of Regulatory CMC at BioPhorum, a leading network of biopharmaceutical organizations that aims to connect, collaborate, and accelerate innovation. With over 20 years of experience in life sciences, regulatory, and technology, she focuses on improving access to medicines through innovation in the regulatory ecosystem.

Yash Sabharwal

President & CEO, QbDVision

Yash Sabharwal is an accomplished inventor, entrepreneur, and executive specializing in the funding and growth of early-stage technology companies focused on life science applications. He has started three companies and successfully exited his last two, bringing a wealth of strategic and tactical experience to the team.

Joschka Buyel

Senior MSAT Scientist at Viralgen, Process and Knowledge Management Scientist at Bayer AG

Joschka is responsible for the rollout and integration of QbDVision at Bayer Pharmaceuticals. He previously worked on various late-stage projects as a Quality-by-Design expert for product and process characterization, process validation, and transfers. Joschka holds a Ph.D. in Drug Sciences from Bonn University and an M.S. and B.S. in Molecular and Applied Biotechnology from RWTH Aachen University.

Luke Guerrero

COO, QbDVision

A veteran technologist and company leader with a global CV, Luke oversees the core business operations across QbDVision and its teams. Before joining QbDVision, he developed, grew, and led key practices for the international agency Brand Networks, and spent six years deploying technology and business strategies for PricewaterhouseCoopers’ CIO Advisory consulting unit.

Gloria Gadea Lopez

Head of Global Consultancy, Business Platforms | Ph.D., Biosystems Engineering

Gloria Gadea-Lopez is the Head of Global Consultancy at Business Platforms. Drawing on her extensive prior experience in the biopharmaceutical industry, she supports companies in developing strategies and delivering digital systems for successful operations. She holds degrees in Chemical Engineering (B.S.), Food Science (M.S.), and Biosystems Engineering (Ph.D.).
