Creating question-answering systems that can interact in a human-like way is a challenging task in the field of Conversational AI. One domain of Conversational AI is Machine Reading Comprehension, which involves interaction between a user and a system: the system holds a text hidden from the user and is asked to answer the questions the user poses about it. When the user poses a sequence of questions, forming a dialogue, the problem is termed Conversational Machine Comprehension (CMC). Ellipsis and anaphora are common phenomena in dialogues and must be addressed; therefore, in CMC, questions cannot be processed without considering the whole dialogue history.
In my thesis, a transformer architecture called Global History Reasoning, which models the whole dialogue history, is applied to the Question Answering in Context (QuAC) dataset. In addition, adapters, which make the training procedure more parameter-efficient, are explored. With adapters, the number of trained parameters is reduced drastically, and as a result the model can be stored in significantly less space. This project allowed me to work with state-of-the-art architectures used in Natural Language Processing and gain substantial knowledge in this field.
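To illustrate why adapters save so much space, the following is a minimal sketch of a bottleneck adapter layer in PyTorch. It is not the thesis implementation; the hidden size, bottleneck width, and class name are illustrative assumptions.

```python
# Minimal sketch of a bottleneck adapter layer, assuming PyTorch.
# Sizes (768 hidden, 64 bottleneck) are illustrative, not from the thesis.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Down-projection, non-linearity, up-projection, plus a residual connection."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen backbone's representation intact.
        return x + self.up(self.act(self.down(x)))

# Only the adapter weights are trained; the pretrained transformer stays
# frozen, so the stored checkpoint shrinks from hundreds of millions of
# parameters to roughly 100k per adapter layer.
adapter = Adapter()
trainable = sum(p.numel() for p in adapter.parameters())
print(trainable)  # 768*64 + 64 + 64*768 + 768 = 99136
```

Because the output shape matches the input shape, such a layer can be inserted after the attention and feed-forward sublayers of each transformer block without changing the rest of the architecture.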