Maize block "M" and "Multidisciplinary Design Program, University of Michigan"

Northrop Grumman GATE-24


This project seeks to leap an order of magnitude forward in software code quality via automated testing. Students on this team will employ Generative AI, trained on domain-relevant tests and continuously refined by user acceptance or rejection of its output, to create system, interface, and unit tests for a complex system.

Abstract:

Northrop Grumman solves the toughest problems in space, aeronautics, defense, and cyberspace to meet the ever-evolving needs of their customers worldwide. They are defining possible every day using science, technology, and engineering to create and deliver advanced systems, products, and services. In support of a set of important space missions, Northrop Grumman has established a software product center, creating a common set of applications and services to support and execute mission activities. This complex system requires a significant body of automated tests to maintain continuous system performance as the product continues to evolve, improve, and scale. This set of more than 60 applications and services, in several different languages, deploys across an air gap for isolation from network access. Students on the Northrop Grumman GATE team will create an off-line generative AI code testing engine that reads the codebase and fills a software testing pipeline for Northrop Grumman engineering review.

We will employ Generative AI trained on domain-relevant tests, and continuously educated based on user acceptance or rejection of generated output, to continuously create system, interface, and unit tests for a complex system. We will present these tests to the development team for acceptance into the CI/CD pipeline, inclusion in another recurring test activity, or rejection back to the GATE software to help refine future generated output. In doing so, we will drastically increase code quality by significantly expanding the set of relevant tests in our pipeline. The tool will be written in Python using popular libraries and frameworks; for example, TensorFlow, Leapfrog AI, and PyTorch.
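As a rough illustration only (not the team's actual design), the accept/reject feedback loop described above can be sketched in Python. The generator here is a stand-in stub that emits skeleton unit tests; in the real system it would be an offline generative model, and rejections recorded in the feedback store would be used to refine future generations. All names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedTest:
    """A candidate test produced by the (stubbed) generative model."""
    name: str
    source: str

@dataclass
class FeedbackStore:
    """Records reviewer decisions so future generations can be refined."""
    accepted: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def record(self, test: GeneratedTest, accepted: bool) -> None:
        (self.accepted if accepted else self.rejected).append(test)

def generate_candidate_tests(codebase_functions):
    """Stand-in for the generative model: emit one skeleton unit test
    per function discovered in the codebase."""
    return [
        GeneratedTest(
            name=f"test_{fn}",
            source=f"def test_{fn}():\n    assert {fn}() is not None\n",
        )
        for fn in codebase_functions
    ]

def review_loop(codebase_functions, reviewer, store):
    """Present each candidate to a reviewer; accepted tests go to the
    CI/CD pipeline, rejections are stored to refine future output."""
    pipeline = []
    for test in generate_candidate_tests(codebase_functions):
        accepted = reviewer(test)
        store.record(test, accepted)
        if accepted:
            pipeline.append(test)
    return pipeline
```

The key design point is that every decision, acceptance or rejection, feeds the store, so the rejection path is training signal rather than waste.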

Impact:

The critical missions supported by the software product that GATE will operate on will benefit from increased stability and resiliency. Northrop Grumman’s customers will benefit from decreased costs made possible by more expedient discovery of potential issues earlier in the process. Northrop Grumman’s team will benefit from an efficiency increase resulting from higher code quality, freeing team members to focus their energies on fielding the challenging, nationally critical capabilities we are entrusted with.

Scope:

Minimum Viable Product Deliverable (Minimum level of success)

  • Full literature and technology review incorporating industry best practice, academic literature, patent materials, and internal best practices from the sponsor
  • Collect and understand available data, business processes, and specific test cases; prioritize a list of test cases
  • Prototype air-gapped generative AI capable of generating simple automated test cases
  • Collect feedback from key stakeholders
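A natural first step for the MVP prototype is reading the codebase to find what needs tests. A minimal sketch of this, using only the Python standard library (the real system would need to handle more than 60 applications in several languages, which this toy example does not attempt):

```python
import ast

def extract_testable_functions(source: str):
    """Walk a Python module's AST and collect public function names and
    their argument lists -- the raw material a test generator would be
    prompted with."""
    tree = ast.parse(source)
    functions = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            args = [a.arg for a in node.args.args]
            functions.append((node.name, args))
    return functions
```

Because this works on the parsed AST rather than the running system, it fits the air-gapped constraint: the codebase can be indexed entirely offline.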

Expected Final Deliverable (Expected level of success)

  • Fully functional generative AI code test engine that is integrated into Northrop Grumman’s software testing system.

Stretch Goal Opportunities: (High level of success)

  • AI-driven codebase analysis looking for “weak points” in the areas of cybersecurity, performance, stability, and maintainability.

Below are the skills needed for this project. Students with relevant skills and interest, regardless of major, are encouraged to apply! This is a team-based, multidisciplinary project. Students on the team are not expected to have experience in all areas, but should be willing to learn and will be asked to perform a breadth of tasks throughout the two-semester project.

Machine Learning Fundamentals (4 Students)

Specific Skills: A solid understanding of machine learning principles, including supervised and unsupervised learning, neural networks, optimization algorithms, and backpropagation, as well as practical application of these techniques.

Experience with deep learning architectures, such as Generative Adversarial Networks and Variational Autoencoders, is desired.

EECS 281 (or equivalent) is required. Priority will be given to students who have completed machine learning coursework.

Likely Majors: CS, ROB, CE, EE

Programming and Software Development (2 Students)

Specific Skills: General programming and software development skills.

EECS 281 (or equivalent) is required.

Strong skills/experience (or willingness to quickly learn) with Python and popular libraries and frameworks; for example, TensorFlow, Leapfrog AI, and PyTorch

Likely Majors: CS, DATA, CE

Data Handling (1 Student)

Specific Skills: Skill in efficiently managing and preprocessing large datasets, understanding of database design and implementation

EECS 281 (or equivalent) is required.

Likely Majors: CS, DATA

Additional Desired Skills/Knowledge/Experience

  • Knowledge and interest in current developments within AI, including staying up-to-date with the latest research papers, attending conferences, and being actively involved in the AI community
  • Domain-specific knowledge in areas like automated software testing
  • Awareness of the ethical implications of AI development, and interest in and promotion of responsible AI use.
  • Strong linear algebra and calculus skills, with an understanding of the theory basis for AI
  • DevSecOps best practices and common tools for automated testing (especially TestComplete, Robot Framework, Jenkins, GitHub, SonarQube)
  • Demonstrated examples of outstanding creativity and problem-solving
  • Strong skills in probability and statistics
  • Specific experience with Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs)
  • Strong skills/experience (or willingness to quickly learn) with Python and popular libraries and frameworks; for example, TensorFlow, Leapfrog AI, and PyTorch
  • Practical experience with data handling and preprocessing of large data sets
  • Knowledge of how to evaluate and compare generative models, to determine their performance and identify areas for improvement
  • The ability to analyze results, identify shortcomings, and iterate on models

Sponsor Mentor

David Black

Executive Mentor


Zachary Greenberg

Zach graduated from the University of Michigan with an Aerospace Engineering degree. He began working at Orbital Sciences Corporation as a propulsion engineer, then became a systems engineer, system architect, program manager, and now program director. Along the way, he also earned a Master’s degree in Systems Engineering from Stevens Institute of Technology and an MBA from George Washington University. Zach’s experience spans engineering analysis, loading propellant into satellites, architecting first-of-a-kind systems, and leading software organizations at scale. He enjoys fine wine, indoor rowing, watching Michigan football, going to the beach, and eating gummy bears (though not all at the same time).

Faculty Mentor


Jeff Ringenberg

Electrical Engineering and Computer Science

Research Interests: Mobile learning software development, tactile programming, methods for bringing technology into the classroom, and studying the effects of social networking and collaboration on learning.

Weekly Meetings: During the winter 2024 semester, the Northrop Grumman GATE team will meet on North Campus on Tuesdays from 3:30 – 5:30 PM.

Work Location: Most of the work will take place on campus in Ann Arbor. Students on this team will be invited to join Northrop Grumman sprint demo meetings remotely every other Wednesday from 9:30 to 11:30 AM, with sprint read-outs at 4:00 PM the same day, in addition to project meetings. This activity will count toward weekly effort.

Course Substitutions: CE MDE, ChE Elective, CS Capstone/MDE, EE MDE, CoE Honors, SI Elective/Cognate

Citizenship Requirements: This project is open to US citizens only, because students will participate in biweekly sprint planning within an organization subject to Government-mandated security restrictions that require it.

IP/NDA: Students will sign IP/NDA documents that are unique to Northrop Grumman.

Summer Project Activities: No summer activities are currently planned.

Learn more about the expectations for this type of MDP project

[email protected]
(734) 763-0818
117 Chrysler Center

© University of Michigan
