There is growing concern about misinformation and biased information in public communication, whether in traditional media or on social forums. While automated fact checking has received much attention recently, the problem of fair information is much larger and more fundamental: it includes insidious forms such as the biased presentation of events and discussions, and of their interpretation. To fully analyse the problem, an interdisciplinary approach is called for. One needs tools and techniques from Linguistics, to study the structure of texts and the relationships between words and sentences; from Game and Decision Theory, to study the strategic reasoning built into the presentation of texts and their individual interpretation; and from Machine Learning and AI, to automatically detect biased text and develop algorithms to de-bias it.

The SLANT project aims at characterising bias in textual data, whether intended (e.g. in public reporting) or unintended (e.g. in writing aiming at neutrality). An abstract model of biased interpretation will be complemented and made concrete using work on discourse structure, semantics and interpretation. We will identify relevant lexical, syntactic, stylistic and rhetorical differences through an automated but explainable comparison of texts with different biases on the same subject, based on a dataset of news media coverage from a diverse set of sources. We will also explore how our results can help alter bias in texts or remove it from automated representations of texts.