House Prices 3D Visualization

We collected almost 200,000 house ads from the Hungarian web. First, we extracted the basic info for each unit and calculated its price per square metre, then we calculated the median price for each district. Finally, we made a 3D three.js visualization with QGIS.


You can find the visualization here.


We used Python for crawling and for data processing – we love BeautifulSoup! We used the fantastic open source QGIS program and its qgis2threejs plugin to visualize our data.


Geo search is one of the hottest topics in search right now, and well, Precognox is specialized in NLP and search, so we think it’s high time to get our hands dirty with geo data. Housing is a big issue everywhere in the world and we think technology can help to understand it and it could help to come up with solutions (yes, we are idealists).

Analyzing discourse on recent issues in Hungary


The Hungarian government passed a restrictive bill against the Central European University. A storm of social media posts followed, protesters took to the streets to express their opinion, and torrents of articles appeared in the media. We are touched by the recent events in our country and wanted to see the national and international discourse around them.

Global discourse on lex-CEU

We collected data from Twitter to visualize the discourse (topic models) and to show the geographic distribution of the participants of the discussion.


You can find the visualization of the topics here.

You can find the 3D visualization of the geographic distribution of tweets here.

We used the Twitter API to collect 7822 tweets written in English containing one of the terms ‘CEU’, ‘Central European University’ or the hashtag #istandwithceu. We used the Stanford CoreNLP tool for lemmatization and named entity extraction. Topic modelling was done with the gensim package and the interactive topic model visualization was generated with the pyLDAvis library. For the interactive globe visualization, we ran the same search, which gave us 9745 tweets. We extracted the geo-location data from the tweets to map them on Google’s WebGL Globe.


Local discourse on mass protests

After the CEU bill, the government targeted NGOs with a new proposal which requires civil organizations accepting financial help from abroad to register at the court as a “foreign funded organization”, despite the fact that they are already obliged to publish their books like any other NGO in the world. Citizens responded with peaceful mass protests and the online media followed the story closely. However, the pro-government media interpreted the news in a very different way.

We collected articles related to lex CEU, the anti-NGO bill and the protests from four Hungarian news sites (two independent and two pro-government). We analyzed 513 articles that appeared between 01.04.2017 and 13.04.2017. We found no significant differences between the two groups at the level of text statistics (lexical diversity and the length of the articles).

Below, you can have a look at the top 150 most frequent words of each site. The word clouds were made by using Processing and the WordCram library.
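The frequency lists behind such word clouds are simple to produce; this sketch uses a couple of made-up sentences as a stand-in for one site's articles.

```python
# Count word frequencies across a site's articles and keep the top 150.
from collections import Counter

articles = [
    "the university bill sparked protests",
    "protests against the bill continued",
]

counts = Counter(word for text in articles
                 for word in text.lower().split())
top = counts.most_common(150)   # (word, frequency), most frequent first
```

A real pipeline would also lemmatize and drop stopwords before counting, which is why "the" does not dominate the published clouds.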

There are no big differences between the word frequencies. So we examined the keywords of each site with the help of the fantastic AntConc software for corpus linguistics. This word cloud shows that there is a great divide between pro-government and independent media.


We found that the volume of coverage is much lower on the pro-government sites (162 vs 351 articles). We used Latent Dirichlet Allocation to analyze the topics of the articles and found that although the two sides cover the same issues, there are two big topics which LDA can identify due to the different linguistic features of the two groups. While the pro-government media prefers terms like Soros University (CEU), Soros-funded NGOs, and foreign actors, the independent media uses a more neutral language and the official names of persons and institutions. You can find our interactive visualization of the topics here.


Topic 1 is mainly about the mass demonstrations against lex CEU and the proposed anti-NGO bill. Surprisingly, this topic contains articles exclusively from two of the sites.


Topic 2 is also about the mass demonstrations against lex CEU and the proposed anti-NGO bill. However, this topic contains articles exclusively from the other two sites, due to their very different vocabulary.

A walk into the semantic space

Having found dramatic differences between the independent and pro-government media, we were wondering how strong is the difference between the languages used by these sites. We trained a word2vec model on the corpus and plotted its 3D t-SNE projection using the threejs R package to see how the words used in the articles are related to each other.


You can find our visualization here.

We plotted only the five hundred most frequent words from each site. There are commonly used words in the top 500; these words occupy the central part of the plot. It seems that origo has no distinct language, as we can barely see yellow dots on the plot. Given the recent story of the site, this is no wonder: origo was bought by a group standing close to the government and most of its staff left, so the site recruited new people and started to collaborate with sites on the right side of the political spectrum. Although 444 and Index covered the same stories, it seems the two sites developed their own languages.

Follow the narratives, but don’t be a solutionist

Tracking down narratives on an issue and visualizing your findings is super easy in 2017 thanks to the open source community. We love technology and we are happy whenever it can help us see the big picture. We see there are two narratives on the same topic, but closing the gap between the two groups and starting a rational discussion between citizens is not about technology. There is no app that can help us. We hope that the followers of these distinct narratives can find common ground and start a discussion in real life before they lose the ability to understand each other.

Read on

Bill Bishop: The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart, Mariner Books, 2009

Eli Pariser: The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, Penguin Books, 2012

George Lakoff: Don’t Think of an Elephant!: Know Your Values and Frame the Debate–The Essential Guide for Progressives, Chelsea Green Publishing, 2004

Norman Fairclough: Language and Power, 3rd Edition, Routledge, 2014

What we presented at the Applied Linguistics Conference

On the 3rd of February, 2017, our company had the honor of participating in the 11th Conference for PhD Students of Applied Linguistics, not with one or two but with three presentations. Our colleague Martina Szabó has recently finished her PhD in applied linguistics at the University of Szeged and has led multiple research projects with our NLP team in the field of Hungarian emotion and sentiment analysis.

Martina, Gergő, Berni and Zsófi

    We presented our findings in three articles: Martina Szabó and Fanni Drávucz wrote about the problem of subjectivity in connection with emotions and sentiments. They were looking for the linguistic signs of uncertainty in our emotion and sentiment corpora and found that the emotion corpus contains 2.5 times more linguistic signs of uncertainty, which suggests that emotions are indeed more personal and subjective than sentiments. They also found that negative emotions and negative sentiment are more closely connected to uncertainty, which could arise from the more polite and indirect expression of such emotions or opinions.

    The second article was based on a research project in which we collected and analyzed a corpus of Hungarian tweets, looking for polarity-changing elements. These lexically negative linguistic items can lose or change their polarity and bear a positive or neutral value as intensifiers. In their research, Martina Szabó, Zsófi Nyíri, Bernadett Lázár and Gergő Morvay analyzed the usage of such intensifiers by male and female Twitter users, and found that while female users preferred to use them with negative adjectives, male users used them more often with positive or negative adjectives and had an overall preference for swear words.

    In another research project, Martina Szabó, Zsófi Nyíri and Bernadett Lázár examined the translatability of negative intensifiers from Russian to English. These linguistic elements are so delicate and complex that their complete meaning is often lost in translation. They analyzed a Russian-English corpus of parallel texts and found that such intensifiers in Russian are often translated into English with a neutral intensifier, partly losing the original meaning, but there is a difference in interpreting negative intensifiers in connection with negative or positive adjectives.

We are really proud of Martina and our NLP team for such hard work!

2016 in Retrospect

Time flies and the end of the year is coming, so it’s high time to summarize what’s happened to us in 2016.

Precognox in the world


We are participating in the KConnect Horizon 2020 project, which aims to bring semantic technologies into the medical field. We are proud to be a partner in a truly European project!

This year, Precognox visited the New World and built a partnership with Basis Technology. One of our colleagues spent three months in Boston, MA as the first step of our co-operation.

We are really multilingual: we worked with texts in Mandarin Chinese (Simplified), Spanish, Arabic, Russian, English, and Hungarian. We gained experience with these languages as part of our projects with Meltwater, the biggest social media monitoring company.

Business as usual

According to the basic law of software development, projects occupy the available resources, and more resources mean more projects. Precognox is no exception: our team is growing, so we are managing more and more projects. We are continuously working on large scale Java based software development projects for various customers; just have a look at the list of our customers and you’ll understand why I mention only one of them here. We are about to start major enterprise search and text mining projects, one of them being an upgrade of the semantic search solutions developed for Profession‘s online job search portals. Precognox has been working on the backend of Profession’s sites for years, so we literally grew up with it; it taught us a lot about enterprise search, and we are excited about the upgrade.


We have a new product called TAS (Text Analytics System). We had several data collecting and cleaning projects and we distilled our experiences into a new tool. TAS helps you to collect, clean and analyze unstructured data; learn more about it on our website.

For profit, and for the greater good

For years, Precognox has employed trainees. Usually, we have software developer and data analyst trainees who work with us on a part-time basis, and we welcome students for summer internships too. We are very proud of our former trainees: many of them started their careers at top companies, one is doing his PhD in the Netherlands, and many of them are our full-time colleagues now. From this September, we are participating in the new collaborative teaching scheme, which means the incoming students spend one or two days a week at the university as ordinary students and the rest of the week at our company as full-time employees. We believe that this practice oriented scheme will help students to jumpstart their careers upon graduation.

This year we were working on data driven projects with two NGOs and two research institutions.

We were working on an information visualization dashboard with EMMA (an NGO dedicated to informing and helping pregnant women). As part of a European project, EMMA’s volunteers interviewed several women across the country on their experiences during pregnancy and motherhood, and we analyzed this data using various text mining tools. This project helped us design a workflow for rapid prototyping of text mining solutions; you can find projects based on this here and here. We do hope EMMA can use our dashboard for analyzing their data and that we can work together on interesting projects in the future.


This summer, we started working with Járókelő, a platform for reporting potholes and other anomalies in the city to the authorities. We’d like to develop a scoring mechanism for the stakeholders.



We are processing public procurement data for the Corruption Research Centre Budapest and the Government Transparency Institute. Our partners’ research on monitoring procurement related corruption has been featured in The Economist recently.


Precognox is committed to open data; that’s why we published our Hungarian sentiment lexicon under a permissive licence.


We publish about our research projects on Nyelv és Tudomány (Language and Science, a popular science online magazine). E.g. we wrote a long article on the media representation of migrants in Hungarian online media, we published several pieces on the social and ethical questions of AI and big data, and we made style transfer videos for the portal in 2016.

Work should be fun!

While we have lots of projects, we are continuously improving ourselves. That’s why we have been organizing the Hungarian Natural Language Processing Meetup since 2012. This year, we teamed up with Meltwater and took the meetup to the next level. We had six meetings with speakers from industry and academia. Two meetups were held in English with speakers from Oxford, San Francisco (Meltwater), and London (BlackSwan).


Precognox is a distributed company with offices in Kaposvár and Budapest, and team members from Szeged and other parts of the country. Several times a year,  we get together to talk about our projects and just to have a blast. Of course, we are real geeks, so we organized in-house hackathons at these events, and we loved hacking on data projects.


We are addicted to conferences. Every year, we attend MSZNY (the Hungarian Computational Linguistics conference), BI Forum (the yearly business intelligence conference in Hungary) and many more. We are happy to present our research to the public and get feedback from the community. Also, we love sharing our knowledge with others, e.g. this year, we gave a lesson on text mining at Kürt Academy’s Data Science course, and a lesson on content analysis and text mining for master students at Statistics Department of ELTE TATK.


This year, we made lots of dashboards for profit and to help scientific inquiries. Having finished these projects, we felt the need for introspection. Although we were working hard to show what data tells us, we didn’t use the full potential of data analysis for advancing humanity. We needed a reason to continue our efforts, we needed a new goal. We turned to the Jedi Church for consolation, the church connected us to the Force, and the Force helped us to visualize the Star Wars texts.


We are so artsy

Everything started with a job ad. We were looking for a new intern and needed a photo for the post that describes the ideal applicant. It seemed a good idea to give style transfer a try and attach an image of our team in the style of Iranian mosaics.


Later, our Budapest unit moved to a new office, so we thought it was a good idea to develop our own decoration for the new place. The results are hilarious: a new typeface (yes!) with characters composed from graphs, and the following pictures.

Finally, “SEMMI” (meaning ‘nothing’) got on the wall of our room.


We are very keen on style transfer, so we made videos too.


Having worked with pictures and characters, we needed a new challenge, so we sonified emotion time series extracted from Hungarian news published during the migration crisis.

And now for something completely different

This year we were working hard and playing hard, it’s time to have a short break. Next year, Precognox will start offering new solutions to its customers and exciting new projects to its employees. Stay tuned, we’re going to blog about these!



Is Kenny Baker the Kevin Bacon of Star Wars? Does every movie have a happy ending?

How do we quantify the importance of the nodes in a network? To answer this question, mathematicians came up with the so-called Erdős number to show how far someone is from “the master” in a network of publications. Movie enthusiasts created the Bacon number as its analogue, based on co-occurrences in movies. But what does this have to do with Star Wars? Which character or actor is the key person in this universe? Is it really true that every blockbuster has a happy ending? We try to answer these questions with the revised version of our study carried out last year and hope to find answers with the help of interactive visualisations.

Erdős and Bacon

What is needed to create a new theory in network science? Apparently, a windy winter night is enough, when Footloose and The Air Up There are on TV one after the other. And of course three American university students who, having watched the movies, begin to speculate: Kevin Bacon has played in so many movies that maybe there is no actor in Hollywood who hasn’t played with him yet. Well, probably it is not true, but backed up with a bit of mathematics and research, a new term, the Bacon number, was born.

The Erdős number was defined in 1969 by Casper Goffman in his famous article ‘And what is your Erdős number?’. It is based on a similar observation about the legendarily productive Hungarian mathematician Paul Erdős, who had so many publications in his life (approx. 1525 articles) in so many different fields that it was possible and worthwhile to classify mathematicians and scientists based on their distance from Erdős in a network of publications. According to this, Paul Erdős’s Erdős number is 0, since he is the origin of the theory. Any scientist who has ever published anything together with Erdős has the Erdős number 1. Anyone who has published together with someone with the Erdős number 1 gets the Erdős number 2, and so on. Generally speaking, everyone has the Erdős number of the lowest-numbered person they have published with, plus one.
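This definition is exactly a shortest-path computation, so a breadth-first search over the collaboration graph yields everyone's number at once. The graph below is a made-up toy example, not real co-authorship data.

```python
# Breadth-first search: distance from the origin in a collaboration graph.
from collections import deque

coauthors = {
    "Erdos": ["A", "B"],
    "A": ["Erdos", "C"],
    "B": ["Erdos"],
    "C": ["A", "D"],
    "D": ["C"],
}

def collaboration_number(graph, origin):
    dist = {origin: 0}
    queue = deque([origin])
    while queue:
        person = queue.popleft()
        for partner in graph[person]:
            if partner not in dist:          # first visit = shortest path
                dist[partner] = dist[person] + 1
                queue.append(partner)
    return dist
```

Here `collaboration_number(coauthors, "Erdos")` gives A and B an Erdős number of 1, C of 2, and D of 3; swap publications for movies and the same code computes Bacon numbers.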

In the case of Kevin Bacon and Hollywood the principle is the same, but instead of publications it is based on movies, and the connection is not co-authoring an article but playing in the same movie. It is only a coincidence and a historical legacy that it is called the Bacon number, because although Erdős is the most productive mathematician in history, with almost twice as many publications as Euler, who comes second on the list, Bacon is not really a central figure in Hollywood. If we check the network of actors in Hollywood, Bacon’s average distance from everyone else is 2.79, which is enough only for 876th place in the ranking. For comparison, Rod Steiger, who is first on the list, has a value of 2.53.

One Saga, Seven Episodes

But what does this have to do with Kenny Baker? We were wondering who the Kevin Bacon of the Star Wars universe was, so we collected the cast members of both the original and the prequel trilogy, also adding the actors of Episode VII, which was released last December. We visualised our findings on an interactive graph. The title – ‘The center of the Star Wars universe’ – is honorary, because the concept of distance related to the Bacon number can hardly be interpreted on this graph. Nevertheless, the prestige value of the origin and the position it occupies within the network can be a valid basis of comparison, as well as the definition of the relations based on the co-starring of the actors.

On the visualisation – to make the network more transparent – we only show actors who played in at least two different Star Wars movies. There is a relationship between two actors if they have starred in the same movie. The more movies two actors have co-starred in, the stronger their relationship is.
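The edge weights behind such a co-starring network can be counted with a few lines of standard-library Python; the cast lists below are illustrative fragments, not the full credits.

```python
# Edge weight between two actors = number of movies they co-starred in.
from collections import Counter
from itertools import combinations

casts = {
    "Episode IV": ["Hamill", "Ford", "Fisher", "Baker", "Daniels"],
    "Episode V": ["Hamill", "Ford", "Fisher", "Baker", "Daniels"],
    "Episode I": ["McGregor", "Portman", "Baker", "Daniels"],
}

edges = Counter()
for cast in casts.values():
    for a, b in combinations(sorted(cast), 2):  # sorted: one key per pair
        edges[(a, b)] += 1
```

In this fragment the two droid actors, Baker and Daniels, share all three casts, so theirs is the heaviest edge, which is exactly why they sit at the center of the real graph.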


Network of actors having played in at least two different Star Wars movies. The interactive version of the graph can be found here.

By clicking on the nodes of the interactive visualisation you can see the number of movies the actors played in, which characters they embodied, as well as the number of their relations. The colors of the nodes correspond to the set of trilogies the actors played in. There is a clear distinction between actors only starring in the original – light blue – and the ones who played in the prequel trilogy – dark blue. This may not be so surprising considering that 16 years passed between the releases of Episode VI and I and 28 years between Episode IV and Episode III.

Naturally, there are actors who connect the two trilogies’ casts, although their number is limited. They form the nodes in the center of the network and are the biggest ones in size. This also indicates that these actors have the largest number of relations and lie on the highest number of shortest paths between pairs of nodes. Actors in this group played in both the original and the prequel trilogy (light green nodes); another group of them additionally got roles in Episode VII as well (dark green nodes).

We can also find two additional subgroups on the graph. The light blue one shows the actors playing a key role in the original trilogy and in Episode VII as well. Carrie Fisher, playing Leia, and Harrison Ford, playing Han Solo, are the most typical representatives of this category. Alec Guinness, who played Obi-Wan Kenobi in the original trilogy, may well be the most interesting member of this group: although he passed away in 2000, he still appears in the credits of Episode VII, thanks to an archive voice recording. Finally, the only actor appearing in both the prequel trilogy and Episode VII is Ewan McGregor, also with a voice recording – it seems the latest episode couldn’t decide which Jedi master to favor: the young or the old one.

The Big Four

Let’s take a look from a different angle and see how the actors, through their characters, fit into the network of the Star Wars universe, and who is lucky enough to call himself its origin.

There are four characters altogether who have appeared in all seven Star Wars movies so far: Anakin Skywalker, Obi-Wan Kenobi, C-3PO and R2-D2. Of course, the young and the old Anakin and Obi-Wan are played by different actors, so they can’t make it to the very top with the two droids. There was a very close competition between Kenny Baker (R2-D2) and Anthony Daniels (C-3PO), but in Episode VII Anthony Daniels took the lead, since Kenny Baker was only a consultant for playing R2-D2. This fact, however, doesn’t affect their roles in the network, since both of them appear in the credits of all seven movies. What is more, they are both versatile actors who played more than one character – Kenny Baker was also Paploo the Ewok in Episode VI, and Anthony Daniels was also Dannl Faytonni in Episode II. Considering the recent death of Kenny Baker, however, we decided to declare him the winner of the title ‘Kevin Bacon of the Star Wars universe’ as a posthumous award. (In reality, Anthony Daniels is just as worthy of the title.)

The runner-up is of course Frank Oz, who played Yoda in six of the seven Star Wars movies (in Episode VII only with his voice). Actors like Ian McDiarmid, playing Senator Palpatine (in Episode V only in the DVD edition), and Peter Mayhew, playing Chewbacca – both played in five movies – have a distinctive place on the list. Last but not least, actors of the original trilogy also appearing in Episode VII, like Carrie Fisher or Mark Hamill, may claim the third place.

The most universal node of the network is no doubt Natalie Portman, who played Padmé Amidala in the prequel trilogy. Her Baker number is of course 1, her Bacon number is 2 and her Erdős number is 5. She studied psychology at Harvard and published several papers, earning a decent Erdős number (among the 134 thousand scientists with an Erdős number, the median is 5).

Sentimental Scenes

We automatically split the Star Wars movie scripts stored in the IMSDb database into scenes, then analysed them with the help of Hu and Liu’s sentiment dictionary. The sentiment scores of each scene from all the episodes can be seen in the interactive visualisations below. The bars marked with brighter colors represent scenes with positive sentiment; the darker bars denote negative ones. The deeper a dark bar reaches, the more negative the sentiment of a scene is; the higher a bright bar reaches, the more positive it is. For neutral sentiment scores there is no visible bar. If we point the cursor at a bar, beside the exact sentiment score we can see the given scene’s location and the top 3 characters as well – i.e. the characters who either speak or are mentioned in the scene.
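The per-scene scoring itself is a simple lexicon lookup. In this sketch the two word sets are tiny made-up stand-ins for Hu and Liu's opinion lexicon, which contains thousands of entries.

```python
# Score a scene as (# positive words) - (# negative words).
POSITIVE = {"hope", "victory", "love", "peace"}   # stand-in lexicon
NEGATIVE = {"fear", "destroy", "dark", "death"}   # stand-in lexicon

def scene_sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return score  # >0 positive, <0 negative, 0 neutral
```

For example, `scene_sentiment("a new hope brings peace")` is 2 and `scene_sentiment("fear leads to the dark side")` is -2; in the visualisations these scores become the bright and dark bars.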

Generally speaking, the episodes of Star Wars are characterized mainly by negative sentiment – which is especially true for the episodes of the original trilogy (Episodes IV, V and VI). The most negative ones are Episodes V and VI and the most positive one is Episode II. In Episode VII the distribution of positive and negative sentiments is more similar to the movies of the prequel trilogy. If we look for the indicators of a happy ending, we can find them in Episodes I, III and V; these movies end with either positive or neutral scenes. Although positive scenes can be found at the end of each movie, based on the script analysis only half of the movies have a ‘happy ending’.


The sentiment scores of the original trilogy’s movies. The interactive version of the graph can be found here.


The sentiment scores of the prequel trilogy. The interactive version of the graph can be found here.


The sentiment scores of Episode VII. The interactive version of the graph can be found here.

The movies are worth analysing from the characters’ point of view as well. Another interactive data visualisation helps here, presenting the dialogs between characters in a network format. It also shows which characters appear most frequently in the movies and what kind of sentiment is typical when they occur.


The conversation graph of the prequel trilogy. The interactive version can be found here.

The conversation graphs reveal that the dialogs in the original trilogy were more focused and mainly involved the main characters – several supporting characters didn’t even get an opportunity to speak. In contrast, the conversations are more equally distributed between the main and the supporting characters in the episodes of the prequel trilogy. This trend can also be seen on the graph of Episode VII. The characters of Anakin Skywalker and Darth Vader are good examples of sentiment changes: in the first two episodes Anakin appears equally in negative and positive roles, then a shift occurs – in the third episode he takes part in more and more scenes filled with negative sentiment, and after his transformation into Darth Vader he appears almost exclusively in negative scenes.


Written by Kitti Balogh, Virág Ilyés, and Gergely Morvay

Sounds of a Story: Sonification of emotion time series extracted from Hungarian news published during the migration crisis

We harvested more than forty-two thousand articles on migration published on the main Hungarian news portals between 27/09/2014 and 11/06/2016. You can find an information visualization dashboard based on the corpus here. This sonification and the accompanying visualization are experimental tools; their sole purpose is to give you a glimpse into how the emotions related to migration flowed in the online media. If you’d like to know more about the data, use our dashboard. If you speak Hungarian, you can read our article as well.

How it’s made

  • Emotion time series were extracted by using our own emotion lexicons.
  • Time series were mapped to midi notes by using the MIDITime Python library.
  • We used the Music21 library for assigning instruments to emotions.
    • distress: Violin
    • joy: Xylophone
    • fear: ChurchBells
    • anger: Woodblock
    • surprise: Bagpipes
    • disgust: Horn
  • The separate MIDI files were merged into a sound file by using LMMS.
  • The video was made by using the ggplot2 R package for plotting the emotion scores for every week.
  • Finally, we used ffmpeg to make a video from the plots and the sonified time series.
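The core of step two is scaling each weekly emotion score onto the MIDI pitch range; this pure-Python sketch mirrors roughly what the MIDITime mapping does (the weekly scores and note range are illustrative, not our real data).

```python
# Linearly map emotion scores in [lo, hi] onto MIDI notes C3..C6.
def score_to_pitch(score, lo, hi, low_note=48, high_note=84):
    frac = (score - lo) / (hi - lo)          # position within the range
    return round(low_note + frac * (high_note - low_note))

weekly_fear = [0.1, 0.4, 0.9, 0.5]           # toy weekly emotion scores
lo, hi = min(weekly_fear), max(weekly_fear)
pitches = [score_to_pitch(s, lo, hi) for s in weekly_fear]
```

Each pitch is then written out as a MIDI note event with a fixed duration, one track per emotion, before the tracks are merged in LMMS.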

Kitti Balogh
Zoltan Varju

Are keyboards changing our thinking? The QWERTY- effect

The QWERTY effect as a concept first appeared in a study by Daniel Casasanto and Kyle Jasmin. In their research paper, Casasanto and Jasmin (hereinafter C&J) argue that because of the keyboard’s asymmetrical shape (more letters on the left than on the right on English, Spanish or Dutch keyboards), letter combinations that fall on the right side of the keyboard tend to be easier to type than those on the left. Therefore, words dominated by right-side letters subtly gain favor in our minds and are regarded as more appealing.
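The measure behind this claim, the "right-side advantage" of a word, can be computed directly from the key layout. The left/right split below is the standard English QWERTY letter rows; the scoring formula is a simple normalized difference, shown for illustration.

```python
# Right-side advantage: (right letters - left letters) / total letters.
RIGHT = set("yuiophjklnm")      # letters typed by the right hand
LEFT = set("qwertasdfgzxcvb")   # letters typed by the left hand

def right_side_advantage(word):
    letters = [c for c in word.lower() if c in RIGHT | LEFT]
    r = sum(c in RIGHT for c in letters)
    return (r - (len(letters) - r)) / len(letters)
```

A word like "lip" (all right-hand keys) scores 1.0, while "sad" (all left-hand keys) scores -1.0; C&J's claim is that scores correlate with how positively a word is judged.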


What C&J say is that the position of the keys and the emotional valence of words are related. This effect may be even stronger in the case of words coined after the 60s.


Well, so much about theory.

The researchers went even further by suggesting that if people tend to favor the positive side of the keyboard it may influence parents when picking names for their babies.


Language Log “made mincemeat” of this theory: it practically ripped the whole article and the QWERTY effect apart, questioning every single sentence while examining and statistically re-analyzing the data on the same corpora. They didn’t find any significant effects, but they came up with lots of interesting questions, like: why should the 60s be the dividing line for name-giving tendencies? This phenomenon could be studied on a wider spectrum. The blog did exactly that and found that the name preference discovered by C&J appears under different circumstances as well. This, however, could be the reason for the popularity of certain names, rather than the QWERTY effect.

Despite all this, the authors (C&J) wanted to do a proper job and eventually did find a relevant significant influence – although others were not so easily convinced. All in all, it seems there must be something there, so this theory is well worth a mass or two.


Even if we don’t go as far as to say that QWERTY influences name-giving trends, it is remarkable that since the birth and rapid spread of the internet, the way we communicate has dramatically changed. Language is no longer solely oral; more and more of our word production happens on our keyboards. Although the source – our thoughts – is still the same, the mode of expression has considerably changed, and a great part of it has shifted to the keyboard.
The assumption that there is some influence here can hardly be debated. What it affects and how is a difficult question to answer, though. What I find fascinating in Casasanto and Jasmin’s work is the claim that, to a certain extent, the keyboard is shaping the meaning of words. I also have the impression that the popular media sort of overlooks this. No matter how slight this meaning-modifying effect might be, and even if the emotional valence of the word itself – whether it has a negative or positive connotation – probably outweighs the QWERTY-induced associations, its presence is still a remarkable phenomenon.

That’s why we decided to experiment a little using a Hungarian keyboard – which is special in this case because more letters can be found on the right and fewer on the left, so the asymmetry is reversed.
If we find even a tiny difference in the reverse direction as well, it would be one more piece of evidence that the assumption is correct and that the way keys are positioned does have an effect on physical and, consequently, psychological well-being – which in turn influences the meaning of words when we read, speak or listen. We have chosen to test the effect traceable while reading. Our findings will be reported in our next post.

For those who wish to lose themselves in the topic, here’s the link to the original article:

Here’s a short summary presented by WIRED:

Here’s the post of Language Log on the QWERTY-effect. The comments are worth reading too:

Another post from Language Log on the name giving trends with neat little graphs showing their results:

by Anna Régeni

Young Statistician Meeting 2016

This week we are presenting our research on using topic models in search and content analysis at the Young Statistician Meeting 2016. You can find our abstract and the accompanying slides below.


Kitti Balogh: Unveiling latent topic structure in anti-Roma discourse using Latent Dirichlet Allocation 

Since the mid-2000s the number of anti-Roma and racist utterances has been increasing in Hungary, and this manner of speech has also become accepted in common discourse. The research focused on extracting anti-Roma topics over this period using a hierarchical Bayesian model called Latent Dirichlet Allocation (LDA). The source material was collected from an online news portal that is the flagship of the far-right media in Hungary; the corpus consists of more than 10,000 anti-Roma news items from 2006 to 2015. Using LDA, 27 anti-Roma topics were extracted, which makes it possible to analyze the distribution of the topics over time and to see how they are connected to the most influential events of the period under investigation. The identified topics correspond to categories identified by qualitative studies on Roma media representation in Hungary. Our research suggests that topic modeling could be a useful supplementary tool in the toolbox of traditional qualitative discourse analysis. The project culminated in an interactive data visualization and a data visualization dashboard, which can be accessed at the following links:


Culture independence vs. context dependency – Ekman’s “dangerous” theory

This post is part of a case study of emotion analysis focusing primarily on the theoretical background of text based emotion representation.

Here I wish to point out that exploring text-based emotions may reveal information otherwise inaccessible to sentiment analysis, and may therefore offer a distinct kind of benefit that enhances its value.

In order to find out what kinds of emotions are “hiding” in texts, we first need to define what we are actually looking for. The simplest solution seems to be searching for linguistic expressions that explicitly indicate a certain emotion. Let’s take a look at some real-life examples:

1 XDDDDDDD well, you know even an innocent smiley can freak you out 🙂

2 Still terrified, the actress turned to the public.

The highlighted items seem worth collecting into a dictionary, organized by the emotions they express. To do that, however, the system of categorization first needs to be defined. The obvious next step for a linguist is therefore to check what psychology has to say about which emotion categories are worth the effort.

The method above is more or less the current beaten track of emotion analysis – if such a track exists at all, given the small number of international and Hungarian publications. While searching for the relevant psychological background, the language technologist soon comes across Paul Ekman’s theory. According to Ekman there are six basic emotions – sadness, anger, fear, surprise, happiness and disgust – whose facial expressions are universal, i.e. independent of the person’s cultural background, and mean the same emotional state for everyone.


In the 1970s Ekman and Friesen developed the Facial Action Coding System (FACS) to taxonomize every human facial expression. The method, the result of decades of research, describes the observable facial movements for every emotion, and by analysing them it determines the emotional state of the person. The fact that both genuine and fake emotions can be identified precisely is eloquent proof of its reliability.

No wonder Ekman was named one of the top 100 most influential people in the May 2009 edition of Time magazine.


Paul Ekman and Tim Roth, the star of the TV series “Lie to me”.


The widespread popularity of this categorization provided a solid basis for emotion analysis in language technology as well. Most relevant studies categorize emotion expressions either directly according to Ekman’s theory (Liu et al. 2003; Alm et al. 2005; Neviarouskaya et al. 2007a,b; Aman-Szpakowicz 2007) or, like us, take it as a starting point and add further classes such as attraction or tension (Szabó et al. 2015). The argument that these emotions are universal is so convincing that computational linguists almost forget to ask whether this is the feature they actually need, or whether this otherwise important fact disguises features that should be an essential part of the analysis.

As promised in the title, I intend to write about Ekman’s “dangerous” theory, referring to Daniel C. Dennett’s book “Darwin’s Dangerous Idea: Evolution and the Meanings of Life” (1995) and drawing a parallel with Ekman’s theory. According to Dennett, Darwin’s theory may be dangerous for two reasons. First, his thoughts questioned the privileged role humans were said to enjoy in the universe and profoundly shook the foundations of the traditional cosmological approach; he also doubted that life itself has a peculiar ontological status. Second, Darwin’s theory is easy to misunderstand and may therefore generate dangerous misinterpretations. The reason why Ekman’s theory – that the ability to read emotions on faces is innately hardwired – is “dangerous” is that it is so convincing that other aspects of expressing emotions, whether facial or linguistic, are easily ignored. One important factor is the role of context in the interpretation of emotions, and not only in text analysis. Let us take a closer look at the phenomenon:

In their article “Language as context for the perception of emotion” (2007), Barrett and her co-authors challenge the idea of innate emotion perception using a particular photo as an example. The photo shows United States Senator Jim Webb celebrating his 2007 electoral victory. Experiments revealed that when subjects saw the image of the senator taken out of context (see image a.), they all said he looked angry and aggressive. When it was presented in its original context, however, subjects agreed that he appeared happy and excited.

The result is remarkable considering that the subjects never found the senator’s facial expression ambiguous or confusing; they came to the conflicting conclusions automatically and effortlessly.


Barrett et al. (2007) consider this phenomenon a paradox, since it is rather contradictory that there should be six facial expressions which are biologically perfectly distinguishable, yet whose interpretations may be entirely context-dependent. The authors try to offer an explanation – that words ground category acquisition – but in my opinion this argument is not convincing enough.

In exchange for the Ekman categories, linguistics seems to lend psychology a conceptual framework here, one that can be traced back to Wilson and Sperber’s relevance theory (2004). It argues that in any communication the hearer or audience searches for meaning and, having found an interpretation that fits their expectation of relevance, stops processing. In the conceptual framework of lexical pragmatics this means that the lexeme itself is nothing but an underspecified semantic representation; it gains its complete meaning only in context (Bibok 2014). Where does this underdetermined meaning come from? There must be a body of pragmatic knowledge embracing all the information necessary for interpretation.

As all this sounds rather complicated, let us demonstrate how the theory works with an example from the field of sentiment and emotion analysis.

3a. Suspect of bestial double murder in custody.

3b. An American lady had a formidable experience while taking part in a shark cage watch program in Mossel Bay, South Africa.

4. Debut of a bestial Volkswagen GTI Supersport Vision Gran Turismo (…) A formidable fastback implementing other aspects of the “GTI” concept.

According to the idea introduced above, in sentences 3a and 3b understanding the highlighted words is based on encyclopaedic information stored in our pragmatic knowledge: from previous experience we have some idea of what something bestial or formidable is like. This is the encyclopaedic information stored in the underspecified semantic representations of the expressions in question, and it lets us work out what they express in the given context. In sentence 4, however, this encyclopaedic information is not in line with the context, so the underspecified semantic representation is not enough and “further” information is necessary – here, the emotive feature of the expressions “bestial” and “formidable”. In a situation like this, it is the semantic feature indicating emotion or intensity that gets activated during interpretation instead of the prototypical or stereotypical meaning. Put more simply: we do not think that the new Volkswagen is as bestial as a murder and that we should be scared; rather, we understand that it is as effective, impressive and surprising as the emotiveness of the phrases “bestial” and “formidable” suggests.

Considering this process of interpretation, a clear parallel can be drawn between expressing emotions at the textual level and understanding the emotional information faces display. The two processes are similar in an evident way: we can interpret the word “bestial” correctly in a context where the interpretation rests on its sheer emotive semantic features, ignoring its prototypical or stereotypical meaning, and we can likewise interpret the face of the senator, displaying the obvious signs of anger, as an expression of excitement and joy if that is the interpretation the context requires.

Although the theoretical parallel above is exciting and remarkable in itself, I had a specific reason to discuss it. My primary goal was to point out that while emotion analysts (and, let’s face it, sentiment analysts as well) often focus on categories, their problems and their possibilities, they sometimes forget about significant aspects such as the role of context in the interpretation of linguistic – or, in the case of facial expressions, non-linguistic – signs. As a result, a relevant psychological theory that can successfully be applied in linguistics may easily become “dangerous”.


Alm, C.O.-Roth, D.-Sproat, R. 2005. Emotions from text: machine learning for text-based emotion prediction. In Proceedings of the Joint Conference on Human Language Technology / Empirical Methods in Natural Language Processing (HLT/EMNLP 2005). Vancouver, Canada. 579-586.

Aman, S.-Szpakowicz, S. 2007. Identifying Expressions of Emotion in Text. In Proceedings of the 10th International Conference on Text, Speech, and Dialogue (TSD-2007), Plzeň, Czech Republic, Lecture Notes in Computer Science (LNCS). Springer-Verlag. 196-205.

Barrett, L.F.-Lindquist, K.A.-Gendron, M. 2007. Language as context in the perception of emotion. Trends in Cognitive Sciences 11. 327-332.

Bibok, K. 2014. Lexical semantics meets pragmatics. Argumentum 10. Debrecen University Press 221-231.

Ekman, P.-Friesen, W.V. 1969. The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica 1. 49-98.

Ekman, P.-Friesen, W. V.-Ellsworth, P. 1982. What emotion categories or dimensions can observers judge from facial behavior? In P. Ekman Ed. Emotion in the human face. New York: Cambridge University Press. 39-55.

Liu, H.-Lieberman, H.-Selker, T. 2003. A Model of Textual Affect Sensing using Real World Knowledge. In Proceedings of the International Conference on Intelligent User Interfaces (IUI 2003). Miami, Florida, USA.

Wilson, D.-Sperber, D. 2004. Relevance Theory. In Ward, G.-Horn, L. eds. Handbook of Pragmatics. Oxford, Blackwell. 607-632.

Neviarouskaya, A.-Prendinger, H.-Ishizuka, M. 2007a. Analysis of affect expressed through the evolving language of online communication. In Proceedings of the 12th International Conference on Intelligent User Interfaces (IUI-07). Honolulu, Hawaii, USA. 278-281.

Neviarouskaya, A.-Prendinger, H.-Ishizuka, M. 2007b. Narrowing the Social Gap among People involved in Global Dialog: Automatic Emotion Detection in Blog Posts, In Proceedings of the International Conference on Weblogs and Social Media (ICWSM 2007). Boulder, Colorado, USA. 293-294.

Szabó, M.K.-Vincze, V.-Morvay, G. 2015. Challenges in theoretical linguistics and language technology of Hungarian text-based emotion analysis. Language – Language Technology – Language Pedagogy: 21st Century Outlook. 25th MANYE Congress, Budapest.