21 CFR Part 4

Part 4 was added to 21 CFR, effective July 22, 2013. This new regulation is intended to help identify and clarify which rules apply to Combination Products. It provides a regulatory framework and defines which Parts of 21 CFR apply to facilities that manufacture single-entity or co-packaged combination products.

In the past, manufacturers of Combination Products had to guess which regulation(s) applied to their specific product. Often, manufacturers opted to follow the Drug GMPs (Part 211), avoiding some of the messier portions of Part 820.

Part 4 clarifies matters by requiring that single-entity or co-packaged combination products comply with the specifics of all relevant regulations. In most cases, this means that the quality system for Combination Products will have to comply with both Parts 211 and 820.

Where requirements overlap, the FDA recommends following whichever guideline is more specific. The examples below help clarify the FDA’s recommendation:


| Quality System | cGMP Requirement | QSR Requirement | Recommended Guideline to Follow |
| --- | --- | --- | --- |
| Drug cGMP (Parts 210 and 211) | Specific requirements for calculation of yield (21 CFR 211.103) | General requirement for calculation of yield as part of design validation (21 CFR 820.30) | Drug cGMP (21 CFR 211.103) |
| QSR (Part 820) | General CAPA requirements identified as part of Production Record Review (21 CFR 211.192) | Detailed CAPA requirements (21 CFR 820.100) | QSR (21 CFR 820.100) |


However, when constituent parts are manufactured and marketed separately, only the respective requirements that apply to each constituent part must be followed, until the constituents are produced as a single-entity or co-packaged product. For example, Company A produces a bulk drug product and Company B produces a delivery system for the bulk drug product. Company C co-packages the two products as a single product:

[Diagram: Part 4 applicability when constituent parts are manufactured separately by Companies A and B and later co-packaged by Company C]
If an organization has already established a Quality System based on either the cGMP or QSR requirements, there is a risk of being out of compliance under the clarification provided in Part 4. Because maintaining two separate quality systems would be counterproductive, the FDA recommends expanding the existing system to contain any missing elements. For example:

·         When a combination drug and device product is manufactured by a company with a Quality System based on the drug cGMPs, the requirements that are not duplicated in Part 820 must also be addressed in the manufacturing process (e.g., CAPA, Design Controls, etc.).


·         When a combination drug and device product is manufactured by a company with a Quality System based on Part 820, the requirements that are not duplicated in Parts 210 and 211 must also be addressed in the manufacturing process (e.g., Calculation of Yield, Tamper-Evident Packaging, Stability Testing, etc.).

The requirements that Part 4 clarifies may seem expensive and daunting from a Quality Systems perspective. However, revamping or completely overhauling an existing system is not necessarily required. Compliance can be achieved by assessing the organization's existing Quality Plan and simply filling in any gaps.



Testing, Commissioning, Validation, Qualification, or Verification? A Brief History - Part 2

We left off in Part One with spiraling costs for commissioning and qualification activities. Some companies, under regulatory pressure, would perform a full IQ/OQ for commissioning, then a “dry-run” IQ/OQ, and then the “real” IQ/OQ. That is three full qualification efforts! Even with all this expensive paperwork, the startup experience was rarely improved. Generating, approving, and executing all of these documents was so difficult and time-consuming that memos and exceptions were heavily employed to release systems to the process validation group on time. Instead of simply letting the experts run the equipment and then make the necessary adjustments and repairs, there was an army of protocol writers, often with little true start-up experience, running around routing documents. The hoped-for solution to this state of affairs was the Risk-Based Approach.

In the life sciences industry, the term “risk based” came into use when the FDA released its Pharmaceutical cGMPs for the 21st Century - A Risk-Based Approach in the fall of 2004. The only use of the term there was in reference to how the FDA would prioritize its own inspections; there is no mention of using a risk-based approach for equipment qualification. I have no idea who was first to use the term “risk based” in conjunction with equipment C&Q, but it took off like wildfire. Industry saw the risk-based approach as an opportunity to rein in the high cost of C&Q. To everyone's surprise, costs went even higher.

Costs rose because, in the conservative pharmaceutical industry, no one wants to go out on a limb and declare that something is “low risk” and therefore requires no documentation. So, in addition to all the standard testing documentation, a risk assessment and a pFMEA were now added to the list of required documents. To compound matters further, quality assurance was borrowing heavily from GAMP with regard to equipment, utilities, and facilities. In 2005, for a simple tank, one might have been required to generate the following documents:

  • Project Validation Master Plan
  • Risk Assessment
  • FMEA Assessment
  • User Requirements Specification
  • Functional Specification
  • Design Specification
  • Factory Acceptance Test
  • Site Acceptance Test
  • Installation Qualification
  • Operational Qualification
  • Trace Matrix
  • Summary Report

That is 12 documents for one tank! A medium-sized capital expansion project could easily require 400-500 documents, all individually authored in MS Word, often with little or no engineering information on hand (to get a jump on the schedule). It's a recipe for disaster.

ASTM Committee E55 was formed in 2003 to help bring a more rational approach to the qualification of Biopharmaceutical and Pharmaceutical Manufacturing Systems. By 2006, a draft document was released to industry. It was fairly radical. A final, much watered-down version was officially released in August 2007. As of late 2013, the ASTM approach is still struggling to gain acceptance within the industry.

In Part 3 of this series, I’ll describe the intent and reality of ASTM E2500, along with some practical advice on implementing an ASTM E2500 risk-based approach to the qualification of Biopharmaceutical and Pharmaceutical Manufacturing Systems.


Risk Management

Risk management is a growing discipline within the Life Sciences industry. With the introduction of FDA’s 21st century GMP and ICH initiatives (such as Q8 Pharmaceutical Development, Q9 Quality Risk Management, and Q10 Pharmaceutical Quality System), drug manufacturing entered a new era of risk management. There is now an expectation that organizations have a high-level risk management policy document in place that defines the following:

·         areas where risk management is applicable

·         methods and tools to be used

·         associated responsibilities of both management and individuals

·         clearly stated ownership of risk decisions

·         how risk is documented and controlled

·         risk review and communication methods and timing

·         standard guides on risk ranking and acceptance

·         risk management training and resourcing
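One common way to implement the risk ranking and acceptance called for above is an FMEA-style risk priority number (RPN). The sketch below illustrates the idea in Python; the 1-to-5 scoring scales, the acceptance threshold, and the function name are illustrative assumptions, not values prescribed by ICH Q9 or the policies described here:

```python
def risk_priority(severity, occurrence, detectability):
    """FMEA-style risk priority number (RPN): higher means riskier.

    Each factor is scored on a hypothetical 1 (best) to 5 (worst) scale.
    """
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 5:
            raise ValueError("each score must be between 1 and 5")
    return severity * occurrence * detectability

# Hypothetical acceptance rule: any RPN above 27 requires mitigation.
rpn = risk_priority(severity=4, occurrence=3, detectability=2)
print(rpn, "mitigate" if rpn > 27 else "accept")  # -> 24 accept
```

A real policy would also document who assigns the scores, how ties are resolved, and when assessments are reviewed, which is exactly what the bullets above call for.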

The system is expected to support the overall risk management process, as depicted in the diagram below.

Fig 1: ICH Q9 Risk Management Process Diagram


Additionally, lower-level tactical procedures such as deviation management, investigations, complaints, change control, validation, computer systems, premises/equipment design and operation, supplier management, annual reviews, and sampling are expected to be in place and to show that risk management is embedded in these functions (Mollah, Long & Baseman, 2013).

Then there are the expectations corresponding to individual risk assessments, which are more detailed and tie back to the effectiveness of the policies and procedures that have been implemented. In the next post in this series, I will describe in detail what goes into a good individual risk assessment.


Experienced Workers Deliver

There was a fantastic post on the Harvard Business Review Blog Network today. It validates everything we have been preaching for years here at Automated Systems Inc. The article takes American business to task for its false assumptions about well-educated older workers. The article states:

Then there’s a set of employer beliefs about workers who have significant work experience and who have attained higher levels in their former organizations. Here employers worry that the worker may expect a higher salary than younger workers, or may be unhappy taking a position that pays less or comes with less responsibility than their prior job, and as a result will look to leave at the first opportunity or be otherwise disgruntled.


This set of beliefs is not directly about age. But it is almost always older workers who are trapped by perceived “over qualification.”

In ASI's experience, this is often not the case. Gretchen Gavett, the article's author, agrees:

Importantly, while some employers fear that older workers will not stick around, my research suggests the opposite is more likely.  It’s worth considering whether, in fact, it is younger workers in their 20s and 30s who are more likely to be actively searching for opportunities to move across jobs in an effort to develop a portfolio of marketable skills and experiences. Older workers are really looking for a company where their considerable skills and experiences are valued and can make a difference.   

ASI has made it a core competency to seek out the experience and dedication that older workers can provide to our clients in the life sciences industry. I shake my head at the talent left underemployed because of false stereotypes. ASI capitalizes on these misconceptions time and time again. At ASI we focus on providing our clients with resources who are experienced, knowledgeable, and know how to execute complex engineering, compliance, and automation projects. We have found that age positively correlates with these attributes.


Choosing the Correct Storage Method in Wonderware Historian

The most frequent request I get from customers creating a new Wonderware Historian tag is to describe the different storage methods that can be selected for the tag. In this blog entry, I will explain the available storage options for tags and the situations where one or the other storage method should be used.

However, before we dive into the technical aspects of creating a new tag, please refer to the diagram below for a quick review of the Wonderware Historian architecture. 




The Historian consists of several interacting programs, or subsystems. The Data Acquisition Subsystem is responsible for communicating with the I/O Server to read real-world values from the manufacturing process. The Data Storage Subsystem receives the values from the Data Acquisition Subsystem and saves them to the Historian Database; as we will see in more detail below, there are several options for saving these values. Finally, the Data Retrieval Subsystem provides the means for a user to query the historic data. There are several retrieval methods available for reading the historic data, and I will describe those options in a follow-up blog entry.

When a new Historian tag is created to save a real-world value, such as a tank level or product temperature, the data storage method has to be defined; this specifies how the tag’s value is saved. The choice of storage method affects the number of values written to disk and the resolution of the data that will be available for later retrieval.

The following storage methods are available when creating a new tag:

  • No data values are stored
  • All data values are stored (forced storage)
  • Only changed data values are stored (delta storage)
  • Only data values that occur at a specified time interval are stored (cyclic storage)

No Data Storage

Selecting the no data storage option is very useful in situations where the Historian’s open architecture is used to provide current production values and there is no need to retain the historic data. For example, we have created tags with the no data storage option to provide live or current values for inventory control applications. In those cases, the external application requested current tank levels at the beginning of each shift.

Forced Storage

The Forced Storage option saves each data value as it is read from the real world. This option is useful when the I/O Server is reading real-world values by exception; that is, each time the value changes, this method saves the new value.

If the I/O Server is set up to poll the value at a given frequency instead of by exception, then this method will save the value at the polling rate defined in the I/O Server. In that situation, Forced Storage is identical to the Cyclic Storage method, with the advantage that save rates faster than 1 second can be achieved; 1 second is the fastest data save rate available for Cyclic Storage.

We use the forced storage option in situations where it is critical that every value is recorded. For example, if a packaging line’s scale is providing the weight of each product, then using this option ensures that every product weight is recorded. Since the Historian also captures the date and time stamp with the value, we also know when each product was weighed.

Delta Storage

The Delta Storage option, also called storage by exception, saves data based on a change in the tag’s value.  To decrease the resolution of tag values saved, the following filters or deadband options are available:

  • Time Deadband
  • Value Deadband
  • Rate of Change (swinging door) Deadband

The Time Deadband defines the minimum time value that needs to elapse before the next change in value will be saved.  Any change in the value within this time period will not be recorded.  A zero deadband value indicates that the system will save each value as it changes.

The Value Deadband defines the percentage amount the tag’s value must change before it will be saved.  Any change in value that is less than the specified deadband will not be recorded.  A zero deadband value indicates that the system will save each value as it changes.

The Rate of Change Deadband defines the percentage of deviation, or the rate at which the tag value must change, in order to save the tag’s value. The Rate of Change Deadband can be used in conjunction with the Time and Value Deadbands to further refine the storage resolution. A value of zero indicates that the Rate of Change Deadband will not be applied.
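To make the deadband filters concrete, here is a minimal Python simulation of delta storage with time and value deadbands. This is an illustrative sketch, not Wonderware code; the function name, tag span, and sample values are invented for the example:

```python
def delta_filter(samples, time_deadband=0.0, value_deadband_pct=0.0,
                 span=100.0):
    """Return the samples a delta-storage tag would keep.

    samples            -- list of (timestamp_seconds, value), sorted by time
    time_deadband      -- minimum seconds between stored values
    value_deadband_pct -- minimum change, as a percent of the tag span
    span               -- engineering-unit span used for the percentage
    """
    stored = []
    last_t = last_v = None
    threshold = span * value_deadband_pct / 100.0
    for t, v in samples:
        if last_t is None:                 # always store the first value
            stored.append((t, v))
            last_t, last_v = t, v
            continue
        if v == last_v:                    # delta storage records changes only
            continue
        if t - last_t < time_deadband:     # change arrived too soon
            continue
        if abs(v - last_v) < threshold:    # change smaller than the deadband
            continue
        stored.append((t, v))
        last_t, last_v = t, v
    return stored

# Chattering tank level: only genuine moves survive a 2% value deadband.
samples = [(0, 50.0), (1, 50.4), (2, 49.8), (3, 55.0), (4, 55.3), (5, 61.0)]
print(delta_filter(samples, value_deadband_pct=2.0))
# -> [(0, 50.0), (3, 55.0), (5, 61.0)]
```

With both deadbands at zero, the filter degenerates to storing every change, which matches the behavior described above.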

We use the delta storage option in situations where the tag value stays constant for long periods of time, such as a tank level, but there are high-frequency fluctuations, or chatter, in the value. Using the different deadband options, the chatter can be filtered out so that only true changes in the tank level are recorded. This significantly reduces the volume of data generated, compared to recording every change.

Cyclic Storage

The Cyclic Storage option saves data based on a defined time interval, or storage rate. With this option, tag values are saved at the defined time interval, but only if the data value changed during that interval. The storage rate selected needs to be at the I/O Server polling rate or slower; there is no sense in saving values faster than they are available from the real world. On the other hand, if the storage rate is too slow, changes in the tag’s value might be missed.

The following predefined storage rates are available to choose from:

  • 1, 2, 3, 5, 6, 10, 15, 30 seconds
  • 1, 2, 3, 5, 6, 10, 15, 20, 30 minutes
  • 1 hour
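The cyclic behavior described above can also be sketched in a few lines of Python. Again, this is an illustrative simulation rather than the Historian's actual storage code, and it assumes at least one sample arrives per storage cycle:

```python
def cyclic_filter(samples, rate_s):
    """Return the values a cyclic-storage tag would keep.

    samples -- list of (timestamp_seconds, value), sorted by time, with
               at least one sample per storage cycle (a simplification)
    rate_s  -- cyclic storage rate in seconds
    """
    stored = []
    last_stored = None
    next_tick = 0
    for t, v in samples:
        latest = v
        while t >= next_tick:              # a storage tick has come due
            if latest != last_stored:      # store only if the value changed
                stored.append((next_tick, latest))
                last_stored = latest
            next_tick += rate_s
    return stored

# Samples every second, stored on a 5-second cycle: the unchanged value
# at later ticks is skipped, so only two points are written to disk.
samples = list(enumerate([10, 10, 10, 12, 12, 12, 12, 15]))
print(cyclic_filter(samples, rate_s=5))  # -> [(0, 10), (5, 12)]
```

Note that the final value (15) is never stored because no storage tick falls after it, which mirrors the caveat above: too slow a storage rate can miss changes.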

In this blog entry, I described the different storage options available in the Wonderware Historian.  In the follow-up blog entry, I will address the Data Retrieval Subsystem, and the available options to query historic data.  An interesting side note to consider:  the choices for querying historic data are independent of the selected storage option, and the Historian will fill in missing tag values during data retrieval.