Software testing is a critical quality-assurance activity in the software development life cycle. Automated Test Case Generation (TCG) can assist developers by speeding up this process. It accomplishes this by evolving an initial set of randomly generated test cases over time to optimize for predefined coverage criteria. One of the key challenges for automated TCG approaches is navigating the large input space. Existing state-of-the-art TCG algorithms struggle, among other things, with generating highly structured input data and preserving patterns in test structures. I hypothesize that combining multiple tribes of AI can improve the effectiveness and efficiency of automated TCG. To test this hypothesis, I propose using grammar-based fuzzing and machine learning to augment evolutionary algorithms so that they generate more structured input data and preserve promising patterns within test cases. I further propose using behavioral modeling and interprocedural control dependency analysis to improve test effectiveness. Finally, I propose integrating these novel approaches into a testing framework to promote the adoption of automated TCG in industry.
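To make the evolutionary search loop concrete, the sketch below illustrates the generate-evaluate-evolve cycle that such TCG approaches build on. It is a minimal illustration under simplifying assumptions, not any existing tool's implementation: the fitness function branch_coverage is a hypothetical stand-in for instrumenting a system under test and measuring a real coverage criterion, and test cases are reduced to integer sequences rather than the structured call sequences a real tool would evolve.

    import random

    # Toy search space: a "test case" is a fixed-length sequence of integers.
    # A real TCG genome would encode method calls and structured arguments.
    GENE_RANGE = (0, 100)
    TEST_LENGTH = 5
    POP_SIZE = 20
    GENERATIONS = 50

    def branch_coverage(test):
        """Hypothetical fitness: fraction of four toy branches this test covers."""
        covered = set()
        for x in test:
            covered.add("even" if x % 2 == 0 else "odd")
            if x > 50:
                covered.add("high")
            if x == 42:
                covered.add("magic")
        return len(covered) / 4

    def random_test():
        # Initial population is purely random, as in the abstract's description.
        return [random.randint(*GENE_RANGE) for _ in range(TEST_LENGTH)]

    def mutate(test):
        # Replace one gene with a fresh random value.
        clone = list(test)
        clone[random.randrange(len(clone))] = random.randint(*GENE_RANGE)
        return clone

    def crossover(a, b):
        # Single-point crossover between two parent test cases.
        point = random.randrange(1, TEST_LENGTH)
        return a[:point] + b[point:]

    def evolve():
        population = [random_test() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # Keep the fittest half, refill with mutated offspring of elites.
            population.sort(key=branch_coverage, reverse=True)
            elite = population[: POP_SIZE // 2]
            offspring = [
                mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP_SIZE - len(elite))
            ]
            population = elite + offspring
        return max(population, key=branch_coverage)

    if __name__ == "__main__":
        best = evolve()
        print("best test:", best, "fitness:", branch_coverage(best))

The proposal's grammar-based and learning-based augmentations would plug into this loop at the random_test and mutate/crossover steps, respectively, to produce structured inputs and preserve promising patterns instead of perturbing genes blindly.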