Methods for multiple treatment comparisons – Annual Meeting Short Course

Half-day course: Sept 30, 2015, 12:00-4:00 pm

Target audience: MS- and PhD-level quantitative researchers

Registration is now open – Course is FREE to all

Instructors

Laura Hatfield, PhD
Department of Health Care Policy – Harvard Medical School
hatfield@hcp.med.harvard.edu

Sherri Rose, PhD
Department of Health Care Policy – Harvard Medical School
rose@hcp.med.harvard.edu

Abstract

Randomized two-arm trials remain the primary source of information establishing medical device effectiveness. Yet in many clinical settings, clinicians and patients choose among multiple possible medical devices. Comparative effectiveness research addresses this gap. To use observational data, we apply causal inference techniques to address confounding. In this course, we present modern statistical techniques that produce consistent evidence to support clinical decision making among multiple medical device treatment options.

Learning objectives

At the end of the course, participants will be able to:

  1. Describe a clinical decision process in terms of the applicable patient population, set of treatment options, treatment assignment mechanism (including all relevant confounders, both observed and unobserved), and clinical outcomes
  2. Identify inferential targets relevant to the desired clinical decision process
  3. Specify causal and statistical assumptions required for the inferential target to be valid
  4. Choose an estimation method that is feasible given the available data, decision process, and required assumptions

Topics

  • Essential methods elements of clinical decision problems
  • Two clinical examples
    • claims data for implantable cardiac devices
    • registry data for stents
  • Inferential targets
    • all pairwise comparisons (device effect on “treated” or average device effect)
    • posterior rankings of device effects
    • marginal structural model
  • Causal and statistical assumptions
  • Estimation methods
    • balancing scores
    • device effect estimates
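The balancing-score and device-effect topics above can be sketched in a small simulation. The following is an illustrative example only (the three devices, the single binary confounder, and all assignment probabilities and effect sizes are hypothetical, not taken from the course): inverse-probability weighting with propensity scores estimated within confounder strata recovers the marginal device means, whereas naive per-arm averages remain confounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# One binary confounder; assignment among three devices depends on it
x = rng.binomial(1, 0.5, n)
probs = np.where(x[:, None] == 0, [0.6, 0.3, 0.1], [0.2, 0.3, 0.5])
t = (rng.random(n)[:, None] > probs.cumsum(axis=1)).sum(axis=1)

# Outcome depends on both the device (mu) and the confounder
mu = np.array([1.0, 1.5, 2.0])
y = mu[t] + 2.0 * x + rng.normal(0.0, 0.5, n)

# Estimated balancing scores: P(T = a | X) within each confounder stratum
e = np.empty((n, 3))
for a in range(3):
    for xv in (0, 1):
        e[x == xv, a] = np.mean(t[x == xv] == a)

# Inverse-probability-weighted means target the marginal device means (mu + 1
# here, since E[2X] = 1); naive per-arm averages mix in the confounder effect
ipw = np.array([np.mean((t == a) * y / e[:, a]) for a in range(3)])
naive = np.array([y[t == a].mean() for a in range(3)])
print("IPW:  ", ipw.round(2))
print("naive:", naive.round(2))
```

With this assignment mechanism, patients with the confounder present are steered toward device 2, so its naive average overstates the device's marginal mean while the weighted estimate does not.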

Instructor Bios

Laura Hatfield, PhD is an Assistant Professor of Health Care Policy, with a specialty in Biostatistics. Dr. Hatfield received her BS in genetics from Iowa State University and her PhD in biostatistics from the University of Minnesota. Her research focuses on trade-offs and relationships among health outcomes. In particular, she develops and applies statistical methods that incorporate multiple sources of information and relationships among outcomes. Dr. Hatfield has particular expertise in Bayesian hierarchical modeling and has taught short courses at industry, government, and professional society meetings.

Sherri Rose, PhD is an Assistant Professor in the Department of Health Care Policy at Harvard Medical School. Dr. Rose received her BS in statistics from The George Washington University and her PhD in biostatistics from the University of California, Berkeley. Broadly, Dr. Rose’s methodological research focuses on nonparametric estimation in causal inference and machine learning for prediction. Within health policy, Dr. Rose works on risk adjustment, health care program impact evaluation, and comparative effectiveness research. Dr. Rose has taught short courses on robust estimation in causal inference and comparative effectiveness for varied audiences.

Methodology Forum March 4, 2015

March 4, 2015 (2:00-4:00pm EST)

Meeting Contact: wood@hcp.med.harvard.edu

Agenda

2:00 – 2:15 Objectives/Logistics

2:15 – 2:30 Introductions

2:30 – 2:45 Case Study 1: Matthew Brennan (Duke)

2:45 – 3:00 Discussion

3:00 – 3:15 Methods Study: Laura Hatfield (Harvard)

3:15 – 3:30 Discussion

3:30 – 3:45 Next Steps

Summary

  • Forum meetings planned on a quarterly basis
  • Summary of meeting posted on public website
  • Form collaborations
  • What can this group do?
    • Write white papers/participate in public forums
    • Identify common problems and propose solutions
      • Prioritize gaps in key methodological areas
    • 3 problems identified by Dr. Brennan:
      • multiple treatments
      • missing data
      • multiple comparisons
    • Dr. Hatfield:
      • Learning curve issues: how handled in post-market setting?
      • Make better use of realistic loss functions (enumerate actions that industry may face, that patients may face, that a regulatory agency may face)

Meta-analysis of rate ratios with differential follow-up by treatment arm: inferring comparative effectiveness of medical devices

Journal: Statistics in Medicine
Authors: Kunz, Laura; Normand, Sharon-Lise; Sedrakyan, Art
Year Published: 2015

Abstract

Modeling events requires accounting for differential follow-up duration, especially when combining randomized and observational studies. Although events occur at any point over a follow-up period and censoring occurs throughout, most applied researchers use odds ratios as association measures, assuming follow-up duration is similar across treatment groups. We derive the bias of the rate ratio when incorrectly assuming equal follow-up duration in the single study binary treatment setting. Simulations illustrate bias, efficiency, and coverage and demonstrate that bias and coverage worsen rapidly as the ratio of follow-up duration between arms moves away from one. Combining study rate ratios with hierarchical Poisson regression models, we examine bias and coverage for the overall rate ratio via simulation in three cases: when average arm-specific follow-up duration is available for all studies, some studies, and no study. In the null case, bias and coverage are poor when the study average follow-up is used and improve even if some arm-specific follow-up information is available. As the rate ratio gets further from the null, bias and coverage remain poor. We investigate the effectiveness of cardiac resynchronization therapy devices compared with those with cardioverter-defibrillator capacity where three of eight studies report arm-specific follow-up duration. Copyright © 2015 John Wiley & Sons, Ltd.
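The central phenomenon in the abstract (bias in the rate ratio when follow-up duration differs by arm but is assumed equal) can be illustrated with a small simulation. This is a sketch only; the rates, follow-up times, and sample sizes below are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # subjects per arm

# Hypothetical event rates and per-subject follow-up durations;
# true rate ratio is 1 (null), but the treated arm is followed half as long
rate = {"treat": 0.5, "control": 0.5}
tau = {"treat": 1.0, "control": 2.0}

# Event counts per subject over each arm's follow-up window
events = {arm: rng.poisson(rate[arm] * tau[arm], n) for arm in rate}

# Correct rate ratio: events per unit of person-time in each arm
rr_correct = (events["treat"].sum() / (n * tau["treat"])) / (
    events["control"].sum() / (n * tau["control"]))

# Naive ratio of event counts, as if follow-up durations were equal
rr_naive = events["treat"].sum() / events["control"].sum()

print("person-time rate ratio:", round(rr_correct, 3))
print("naive count ratio:     ", round(rr_naive, 3))
```

Under the null, the naive count ratio is pulled toward the follow-up ratio between arms (here 0.5 rather than 1), consistent with the abstract's observation that bias worsens as that ratio moves away from one.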

SMART Medical Device Informatics

SMART Medical Device Informatics Think Tank Meeting
Improving device data capture in electronic health information
A Specific, Measurable, Achievable, Results-oriented, Time-bound Collaboration

February 24-25, 2015
Agency for Healthcare Research and Quality Headquarters
Rockville, MD

Preliminary Summary

February 24
Improving structured device data capture: The current landscape
8:00am-5:00pm

8:00-8:30 am – Continental Breakfast

8:30-8:45 am – Welcome & Introductions

8:45-10:00 am – Plenary: Building a SMART medical device data capture infrastructure
Purpose of this Session: To highlight key national initiatives that will inform prioritization of Think Tank projects and to set a common vision and goals for expected outcomes over the two-day session.

Sharing Knowledge to Improve Device Data Capture
Purpose of the Use Case Sessions: To review the use case areas and specific vignettes that provide the basis for the Think Tank Day 2 demonstration project discussions. Knowledge sharing sessions will be used to discuss and highlight the reasons for initiating each project; its impact on cost, quality, and patient outcomes; best practices; common barriers; and elements of these projects to incorporate into Day 2 demonstration projects and action plans.

10:00-11:00 am – Use Case One – Capturing device attributes
Session Leader: Mike Schiller
Description of Use Case: Capture and exchange of core device identification information within the healthcare provider environment for use as a master source to consistently identify the device in all downstream systems and processes.

11:00 am-12:15 pm – Use Case Two – Making the link between the patient and device
Session Leader: Natalia Wilson/Joseph Drozda
Description of Use Case: Linkage of structured device and patient data during a procedure into a patient-device data capture system, including the ability to exchange data with other systems that support linkage between procedure, device, and patient.

12:15-12:45 pm – LUNCH (Demonstration Project Posters/Demos/Reviews)
Purpose of this Session: Use time to network and to view/discuss posters that outline national and international device ecosystem and demonstration efforts – a comparison of different options.

12:45-1:30 pm – Use Case Three: Obtaining structured device data from implantable device output
Session Leader: Terrie Reed
Description of Use Case: Exchange of device information from device itself to/from patient-device data capture systems.

1:30-2:30 pm – Use Case Four: Extracting structured device data collected in previous use cases for downstream system use
Session Leader: Marc Overhage
Description of Use Case: Input and exchange of patient and device information from source systems to a secondary source (e.g., registry, adverse event report, or clinical trial). This may or may not include the addition of supplemental data for the purpose of populating the secondary source.

2:30-2:45 pm – BREAK

2:45-3:45 pm – Environmental Review: Opportunities for device data engagement
Session Leader: Ed Hammond
Purpose of this Session: To provide overviews of related health data capture and downstream analysis initiatives and explore opportunities for engagement of device stakeholders and use cases in these efforts.

3:45-4:30 pm – Panel discussion of obstacles/lessons learned; Preparation for day 2
Session Leader: James Tcheng
Purpose of this Session: To summarize common themes heard in project summaries and help set priorities on the most important issues to address in Day 2 breakout sessions. Set expectations for Day 2 outcomes and ideas for a path to success, including envisioning the end-to-end environment.

4:30-5:00 pm – Preparation for Day 2

  • Review purpose of Day 2 breakout sessions and solicit comments
  • Distribute action plan ideas and common questions to be discussed
  • Define expected outcomes
  • Ensure that initial session assignments are known
  • Feedback to Session Leaders

February 25
Development of Action Plans for Pilot Projects
8:30 am – 3:30 pm

Facilitators for Overall Group Discussions – Mitchell Krucoff, Karen Conway

8:30-9:00 am Overview of Day’s Exercises
Summary of Day 1 – Review of Project/Issue Matrix: Participants will initially meet in a main room where there will be a recap of Day 1 common issues/lessons learned and sharing of any last-minute changes to the Breakout Session format.

9:00-10:30 am Use Case Discussions – Breakout Session Rooms by Use Case (see nametag for assignment)

Purpose of this Session: Provide time for interested parties in each Use Case area to discuss their projects in more detail and prepare Use Case Worksheets to be used as a reference for the development of pilot project plans.

Breakout Session Leaders and scribes will be assigned to help participants meet the goals of the use case session. Leaders and scribes will receive session guides to assist in achieving objectives and preparing for the development of action plans.

10:30-10:45 am – BREAK

10:45-11:30 am – Group Session: Discussion and Action Plan Area Assignment
Purpose of this Session: To review potential project areas and compile learnings from the use case discussions to define collaborative groupings and begin work on pilot project plans. Participants will work with session leaders to select an area of interest. Potential areas for consideration:

Action Plan Area #1

PREREQUISITES: Define and resolve what is needed at the foundational level to facilitate generalizable medical device data capture and exchange across the healthcare ecosystem – from the time of device manufacture to the time the device is removed from the patient or the patient dies. Examples include:

  • Access to Master Device Identification Data
  • Infrastructure to support Device Data Capture and Exchange (bar code readers, codesets like SNOMED-GMDN, requirements for adding UDI/device identification fields in HIT systems)
  • Development of business value propositions for C-Suite support for demonstration projects
  • Funding source support for pilots
  • Vendor support for demonstrations – Consortium as means to provide common requirements

Action Plan Area #2

FLOW: Create a pilot plan to demonstrate that device information can be captured and exchanged from ‘barcode to bedside and beyond’ in a way that can iterate easily, be generalizable, and inform long-term device analysis and evaluation of performance.

  • Include representatives from multiple stakeholder groups
  • Define business, clinical, and technology requirements – high level
  • Starting with existing projects, identify where current project work is happening or is already planned to happen
  • Identify opportunities to develop new projects that will advance current work and/or fill unmet needs

Action Plan Area #3

EVALUATION/ANALYSIS: Advance the capacity to leverage and analyze the data that is available from medical devices for use in public health surveillance.

  • By device type, what data is available? What standards are available?
  • What additional data must be leveraged so that analysis for public health surveillance can be performed?
  • Where does that data come from (assuming it exists)?
  • What tools and technologies are needed to create and use the resulting analysis datasets?

11:30 am-12:30 pm – Form into Initial Action Plan Groups

  • Identify Potential Collaborators
  • Define the scope of the pilot
  • Begin planning discussions

12:30-1:15 pm Lunch

1:15-2:30 pm Commitment to Action Planning

  • Complete pilot project plans including proposed timelines and commitments to the work.

2:30-3:00 pm Report out of Action Plans by each Group

3:00-3:30 pm Final Summary of Proposed Action Plans and Next Steps


OHRP Posts Correspondence Related to the Application of 45 CFR part 46

OHRP has posted its correspondence with the director of a national health registry, in letters dated August 11, 2011 and December 29, 2011, responding to questions about the application of 45 CFR Part 46 to activities related to the registry, in the belief that others may find the content useful. The letters clarify the following points:

•    If research conducted by a registry is not part of or supported by HHS, or covered by an HHS federalwide assurance, then the regulatory requirements of 45 CFR 46 do not apply to that research activity even if it would be considered nonexempt human subjects research under those regulations.

•    The application of the regulations to an activity depends in part on whether the activity meets the regulatory definition of “research,” which depends on the specific facts of the activity, and not on whether it is labeled “quality improvement” or something else.

•    A research registry could be designed so that the regulations would not apply to the creation and operation of the registry through various mechanisms, including the use of codes instead of identifiers in the original release of data to a registry, or the use of computer programming to merge identifiable data sets without any person being able to view the data in identifiable form.

•    Institutions holding information originally obtained for clinical or administrative purposes whose agents simply release identifiable private information to a registry are not engaged in any research conducted by the registry, and do not have to meet any regulatory requirements of 45 CFR 46 in this regard.

•    Outside researchers who request the release of non-identifiable private information from the registry for secondary research analyses are not conducting “human subjects” research, and therefore the regulations do not apply to this activity, and there is no requirement for either IRB review or informed consent.

•    If healthcare providers enhance or extend their standard of care in follow-up interviews with their patients and those changes would have been implemented regardless of any secondary research purpose, then the data collected through those interviews would not be considered research; in contrast, if part of the reason for the change in interview data collected is for research, then the data collection would be considered part of a research activity.

•    Where appropriate, OHRP supports the use of single or central IRB review and approval of research conducted by clinical registries in circumstances where more than one institution is engaged in the research.

OHRP notes that the activities of such registries may also need to meet requirements under the Health Insurance Portability and Accountability Act (HIPAA), administered by the Office for Civil Rights (OCR). OHRP encourages institutions with questions about the HIPAA regulations to contact OCR directly, at (800) 368-1019.

OHRP is working to provide helpful information to institutions and the public regarding the applicability of the regulations to the various kinds of activities carried out by health registries and the institutions involved in some way in those activities. OHRP has asked the Secretary’s Advisory Committee on Human Research Protections (SACHRP) to provide recommendations related to this topic, and continues to develop information that can be used to protect human subjects in research and avoid unnecessary confusion and administrative burden.

The full text of OHRP’s August 11, 2011 and December 29, 2011 correspondence can be accessed at: http://www.hhs.gov/ohrp/policy/Correspondence/correspondence_regarding_the_application_of_45_cfr_part_46_to_the_activities_related_to_a_national_health_registry.html