MONOGRÁFICO

A Visualisation Dashboard for Contested Collective Intelligence. Learning Analytics to Improve Sensemaking of Group Discussion

Un panel de visualización para la Inteligencia Colectiva Controvertida. Analíticas de aprendizaje para la mejora de creación de ideas en grupos de discusión

Thomas Daniel Ullmann 1
The Open University, Reino Unido
Anna De Liddo 2
The Open University, Reino Unido
Michelle Bachler 3
The Open University, Reino Unido


RIED. Revista Iberoamericana de Educación a Distancia, vol. 22, no. 1, 2019

Asociación Iberoamericana de Educación Superior a Distancia

“Los textos publicados en esta revista están sujetos a una licencia “Reconocimiento-No comercial 3.0” de Creative Commons. Puede copiarlos, distribuirlos, comunicarlos públicamente, siempre que reconozca los créditos de la obra (autor, nombre de la revista, instituciones editoras) de la manera especificada en la revista.”

Received: 05 July 2018

Accepted: 06 October 2018

How to reference this article: Ullmann, T. D., De Liddo, A., y Bachler, M. (2019). A Visualisation Dashboard for Contested Collective Intelligence. Learning Analytics to Improve Sensemaking of Group Discussion. RIED. Revista Iberoamericana de Educación a Distancia, 22(1), pp. 41-80. doi: https://doi.org/10.5944/ried.22.1.22294

Abstract: The skill to take part in and contribute to debates is important for informal and formal learning. Especially when addressing highly complex issues, it can be difficult to support learners in participating in effective group discussion and to stay abreast of all the information collectively generated during the discussion. Technology can help with the engagement and sensemaking of such large debates; for example, it can monitor how healthy a debate is and provide indicators of how participation is distributed. Contested Collective Intelligence (CCI) is a framework that aims at harnessing the intelligence of small to very large groups with the support of structured discourse and argumentation tools. CCI tools provide a rich source of semantic data that, if appropriately processed, can generate powerful analytics of the online discourse. This study presents a visualisation dashboard with several visual analytics that show important aspects of online debates facilitated by CCI discussion tools. The dashboard was designed to improve sensemaking and participation in online debates and has been evaluated with two studies, a lab experiment and a field study, in the context of two Higher Education institutes. The paper reports findings of a usability evaluation of the visualisation dashboard. The descriptive findings suggest that participants with little experience in using analytics visualisations were able to perform well on the given tasks. This is a promising result for the application of such visualisation technologies, as discourse-centric learning analytics interfaces can help support learners’ engagement with, and sensemaking of, complex online debates.

Keywords: learning analytics, collective intelligence, argumentation, online discussion, information visualisations, online deliberation, sensemaking, dashboard.

Resumen: La habilidad para participar y contribuir en los debates es importante para el aprendizaje informal y formal. Especialmente cuando se abordan temas altamente complejos, puede ser difícil apoyar a los alumnos que participan en una discusión grupal efectiva y mantenerse al tanto de toda la información generada colectivamente durante la discusión. La tecnología puede ayudar con el compromiso y razonamiento en debates tan grandes, por ejemplo, puede monitorear cuán saludable es un debate y proporcionar indicadores sobre la distribución de la participación. Un marco especial que pretende aprovechar la inteligencia de grupos de pequeños a muy grandes con el apoyo de herramientas de discurso y argumentación estructuradas es la Inteligencia Colectiva Controvertida (CCI). Las herramientas de CCI proporcionan una fuente rica de datos semánticos que, si se procesan de manera adecuada, pueden generar un sofisticado análisis del discurso en línea. Este estudio presenta un panel de visualización con varios análisis visuales que muestran aspectos importantes de los debates en línea que han sido facilitados por las herramientas de discusión de CCI. El tablero de instrumentos fue diseñado para mejorar la creación de sentidos y la participación en los debates en línea y se ha evaluado con dos estudios, un experimento de laboratorio y un estudio de campo, en el contexto de dos institutos de educación superior. Este artículo informa sobre los resultados de una evaluación de usabilidad del panel de visualización. Los hallazgos descriptivos sugieren que los participantes con poca experiencia en el uso de visualizaciones analíticas pudieron desempeñarse bien en determinadas tareas. Esto constituye un resultado prometedor para la aplicación de tales tecnologías de visualización, ya que las interfaces analíticas de aprendizaje centradas en el discurso pueden ayudar a apoyar el compromiso de los alumnos y su razonamiento en debates en línea complejos.

Palabras clave: análisis de aprendizaje, inteligencia colectiva, argumentación, discusión en línea, visualización de información, deliberación en línea, razonamiento, análisis de aprendizaje, tablero de instrumentos.

Social constructivist ideas are prevalent in educational research. They postulate that social interactions, such as group discussions and debates, are important for learning, as meaning is constructed mainly socially through interaction with other individuals (van Merriënboer & de Bruin, 2014). Contested Collective Intelligence (CCI) (De Liddo, Sándor, & Buckingham Shum, 2012), a specific discourse-centric form of Collective Intelligence (Malone & Klein, 2007; Malone, Laubacher, & Dellarocas, 2010), is an emerging research area that focuses especially on structured discourse and argumentation, which also makes it a viable area of research for education. Mostly facilitated by argumentation-based online discussion tools (Scheuer, Loll, Pinkwart, & McLaren, 2010), CCI aims at supporting collective sensemaking of complex societal dilemmas and seeks to improve our collective capability to face complex problems by talking to each other and debating online. One of the key issues shared by the most common platforms for online debate, also used in formal and informal education contexts, is poor summarisation and visualisation (Klein & Convertino, 2014; Smith & Fiore, 2001). In fact, online debate is still heavily dominated by text-based content, while most online users nowadays wish to have access to easy-to-understand image- or video-based content that they can grasp rapidly and share easily with their peers. Nevertheless, conveying the results of online discussions with effective visualisation methods remains an open challenge. How would you visualise what happens in an online discussion community? How can we make ideas and arguments more tangible so that they can be easily grasped, understood and shared? This paper presents research work aimed at answering these questions by describing the design and development of a Visual Analytics dashboard for Collective Intelligence (CI), namely the CI Dashboard. The CI Dashboard is a new Visual Analytics service which supports debate summarisation, understanding and sensemaking by providing a variety of alternative visualisations of the state, content and results of an online discussion, as well as of the participation dynamics of the people involved.

In the last decade, Visual Analytics has been the subject of a large body of research in the Learning Analytics community. In particular, the study of learning analytics dashboards aims at raising awareness and reflection about learning (Kravcik et al., 2017) by visualising learner traces (Verbert et al., 2014). In this context, the CI Dashboard can be conceived as a learning analytics service, which visualises the traces that learners leave in structured group discussion (namely learners’ ideas, questions, and arguments) in a variety of useful ways to improve sensemaking and the quality of participation.

This paper presents the CI Dashboard concept and preliminary results of user testing carried out to assess the usability and usefulness of five CI Dashboard visualisations. We conclude with lessons learned and future challenges for the development of discourse-centric collective intelligence technologies and reflect on the implications of applying such technologies in formal and informal learning contexts.

BACKGROUND KNOWLEDGE AND MOTIVATION

Nowadays online debate is dominated by social media platforms and is widely text-based. At the same time, we are witnessing a continuous shift toward multimedia devices and interfaces. This is both a result of users’ need for more intuitive and ‘quick’ ways to grasp complex information (Bennett, Maton, & Kervin, 2008; Harasim, 2000), and a way to bridge issues of digital literacy and even translation barriers, which can hinder international collaboration and collective intelligence at large scale.

The importance of text-based information is diminishing while that of symbolic information, interactive images and multimedia information is increasing; these are considered to enhance lay understanding in fields such as health, patient communication, and individual decision-making processes (Adams, 2010). Just as an image can capture a thousand words, interactive visualisations can compress and convey advanced data analytics and provide intuitive instruments for users to explore and make sense of online debates in new and interesting ways.

Previous research demonstrates that important discussion dynamics are lost when online debate is predominantly constructed in textual form and represented with linear interfaces, and suggests that analytics and visualisations can improve community understanding of the online discussion by making these dynamics visible (De Liddo, 2014).

The research literature documents the advantages, and challenges, of making the structure and status of a dialogue or debate more visible (Buckingham Shum, 2003). Following Concept Mapping (Novak, 1998), research on computer-supported argument visualisation (CSAV) focuses on making users’ lines of reasoning and (dis)agreements visually explicit. Network visualisations of arguments, otherwise called argument maps, are semantic networks of discourse elements and can provide computational intelligence to what have been defined as Web 2.0 Argumentation technologies (Buckingham Shum et al., 2008). This semantic augmentation has also led to the proposal of semantic web standards for an Argument Web (Rahwan, Zablith, & Reed, 2007), which aims to enable new levels of advanced, large-scale and cross-platform online discussion data analytics, powered by recent developments in argument mining research.

In the field of online discussion within common social media (such as forums, blogs, and news commenting), recent findings suggest that argument network visualisations can effectively improve online debate by facilitating higher-level inferences and by making the debate more engaging and fun (De Liddo & Buckingham Shum, 2014). At the same time, some researchers argue that a larger number of participants in the discussion and a higher complexity of the discourse ontology may make graph visualisations too clumsy to be usable (Hair, 1991; Scheuer et al., 2010). Argument visualisations have shown both advantages and limitations; it therefore remains unclear which factors mainly influence the suitability of graphical representations of arguments for supporting online discussion. Expanding on this research gap, even less is known about what alternative visualisations can be designed to effectively convey the results of online deliberation.

Currently, the sensemaking and interpretation of complex data is usually left to analytics experts (Rienties et al., 2017). However, the users of debate applications are not primarily experts in making sense of visual analytics. Their primary aim is to participate in online discussions, and we therefore cannot assume that they have either the knowledge or the skills to use analytics visualisations to understand the underlying conversation. If we want to empower users with sophisticated visualisations, we need to understand whether relatively inexperienced users can use these visualisations to solve debate-related tasks.

The research presented in this paper contributes to the assessment not only of argument network visualisations but of a variety of different graphical representations of an online debate’s content, and it investigates the advantages and affordances of these visualisations for supporting online discussion and large-scale Contested Collective Intelligence.

To conduct this study, we designed and developed a visual analytics dashboard for collective intelligence, which provides a variety of alternative visualisations of the state, content and results of an online discussion and can be embedded in any IBIS-based online discussion tool (De Liddo et al., 2014b). In the following we describe the main concept and components of the CI Dashboard and present initial insights from the user testing.

THE CI DASHBOARD

The visual analytics dashboard for collective intelligence (CI dashboard) is an open online service that provides analytics visualisations for argumentation-based CI platforms (otherwise called CCI systems; De Liddo et al., 2012). These are online systems, such as ideation, discussion, knowledge mapping and co-creation tools, which use a modified IBIS argumentation data model to structure online users’ interaction (De Liddo et al., 2014b). Some examples of argumentation-based CI systems are DebateHub and Assembl.

DebateHub is an online discussion tool which has a ‘forum like’ user interface but behind the scenes structures discussion data in terms of IBIS elements, such as issues, answers, and pro and contra arguments. Assembl is a collaborative solution building tool which allows large numbers of people to be mobilised around tackling complex problems. Similar to DebateHub, Assembl supports different stages of a collaborative problem solving process (from ideation to discussion to idea selection and synthesis) and uses various forum and wiki-like interfaces to gather users’ contributions along this collaboration process. In the back end, Assembl also uses a simplified IBIS data model to structure users’ data. This common data structure enables the automatic development of advanced visual analytics and semantic visualisations (Nazemi et al., 2009; Ullmann et al., 2009), which are provided by the CI dashboard service.

Users Requirements and Design Concepts

The CI Dashboard was designed and developed in the context of CATALYST, a project funded by the European Commission, which aimed at providing new argumentation-based CI technologies to facilitate the emergence of collective intelligence in a social innovation context. As part of the project we conducted an initial user requirements analysis. The main result of this analysis has been the definition and prioritisation of 10 pain points of existing social media platforms that harness the collective intelligence of large groups of people through structured online discussions (see page 7 of the Catalyst deliverable, De Liddo et al., 2014a). The top three ranked problems identified by the community were poor commitment to action, poor summarisation and poor visualisation (see page 25 of the Catalyst deliverable, De Liddo et al., 2014a).

Eighty percent of the 50 consulted social innovation experts indicated that poor summarisation and visualisation are key issues of current online discussion platforms. Users report that common online discussion spaces do not provide a useful overview of an online debate, and this undermines participation and the quality of contributions, from the perspective of both newcomers and community managers. The task of the community managers is to monitor the community, keep it up to date on the state of a debate, detect problems and communicate progress. They reported a lack of appropriate visualisation tools to monitor the online community and summarise debate outcomes and dynamics.

The CI dashboard has been designed to directly address these last two challenges of poor visualisation and summarisation of the state and progress of an online debate. The needs expressed by the users were converted into a higher-level goal and system requirements for the CI dashboard.

Provide a variety of Visual Analytics interfaces to:

• summarise the state and progress of a debate,

• identify and assess important contributions,

• focus users’ attention on missing, contradictory, conflicting points

• highlight hidden content, social, behavioural and discourse patterns in the debate.

The CI dashboard was then designed as an online visualisation service. The CI dashboard website (available at cidashboard.net) gives an overview of all analytics visualisations available to the users, allows them to be viewed individually with demo data or with the user’s own data, allows users to assemble a custom dashboard of visualisations, and provides the information necessary to embed the visualisations or the custom dashboard into other argumentation-based discussion platforms.

The CI dashboard is an analytics visualisation service provider for other CI and online discussion platform providers, like Assembl - a large-scale co-production system, DebateHub - a hub for structured debates, or LiteMap - a debate mapping tool (example community platforms in Figure 1). The communication between the analytics visualisation provider and the platforms is based on a standardised data format - the CATALYST Interchange Format (CIF) 7. The CIF format is modelled in terms of RDF and is serialised as JSON-LD (see Data Layer in Figure 2). It provides a standardised description of online conversations. The CI dashboard uses the CIF data either directly to generate visualisations, or it requests CI statistics from a metric service (Figure 1). This service calculates CI-specific metrics from the CIF data and provides these to the CI dashboard (Klein, 2014; Parent et al., 2015; Ullmann et al., 2014). Other services, such as a Social Network Analysis (SNA) service (Sie et al., 2012; see example Services in Figure 1), can also be built from the CIF data to provide additional analytics to the CI dashboard.
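To illustrate how the dashboard can consume such conversation data, the sketch below assumes a simplified, flattened record per contribution. The field names and types are invented for illustration and do not reproduce the actual CIF JSON-LD vocabulary (published at the URL given in note 7).

```typescript
// Illustrative only: a simplified, CIF-like contribution record and a helper
// that derives simple debate statistics, as a metric service might do.
type ContributionType = "issue" | "idea" | "pro" | "con";

interface Contribution {
  id: string;
  type: ContributionType;
  author: string;
  created: string;   // ISO 8601 timestamp
  replyTo?: string;  // id of the parent contribution, if any
}

function countByType(contributions: Contribution[]): Record<ContributionType, number> {
  const counts: Record<ContributionType, number> = { issue: 0, idea: 0, pro: 0, con: 0 };
  for (const c of contributions) counts[c.type] += 1;
  return counts;
}
```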

Figure 1. CI dashboard integration context. The figure shows the CI Dashboard communication with external platforms and services. Specifically, the CI Dashboard (central column) receives input data from a series of community platforms (right column) and passes the data on to various external analytics services (left column), which return relevant metrics that are finally used by the CI Dashboard to produce the Analytics Visualisations (central column).

The architecture of the CI Dashboard Service (Figure 2) was designed to receive CIF-structured data from community platforms, which is then analysed by various analytics services, and finally to produce visualisations. This architecture enables easy improvement and collaborative development of new, powerful analytics and visualisations on top of argumentation data.
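As a rough sketch of what the Transformation Layer described in Figure 2 does, the following function filters CIF-like records by date, aggregates them by contribution type, and maps the result onto a simple chart specification that a Visualisation Layer widget could render. The record shape and function names are assumptions for illustration, not the project’s actual code.

```typescript
// Hypothetical Transformation Layer step: filter, aggregate, map.
interface BarChartSpec {
  title: string;
  bars: { label: string; value: number }[];
}

function toContributionBarChart(
  records: { type: string; created: string }[],
  from: Date,
  to: Date
): BarChartSpec {
  // Filter: keep only records inside the selected date range.
  const inRange = records.filter(r => {
    const t = new Date(r.created);
    return t >= from && t <= to;
  });
  // Aggregate: count records per contribution type.
  const counts: Record<string, number> = {};
  for (const r of inRange) counts[r.type] = (counts[r.type] ?? 0) + 1;
  // Map: produce a specification ready for rendering.
  return {
    title: "Contributions by type",
    bars: Object.entries(counts).map(([label, value]) => ({ label, value })),
  };
}
```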

Figure 2. Architecture of the CI Dashboard. The architecture consists of three layers: 1) the Data Layer, which requires the data from discussion platforms to be formalised in the CIF standard, designed by the Catalyst development team to capture discourse data; 2) the Transformation Layer, in which the data is filtered, aggregated and mapped into the visualisations that the user has selected; 3) the Visualisation Layer, in which the visual analytics are packaged in a dashboard widget that can be embedded in any website.

At present the CI dashboard contains a growing list of analytics visualisations (Ullmann et al., 2014). This paper reports part of the results of the evaluation of five CI dashboard analytics visualisations.

CI Dashboard Visualisations

Five of the visualisations of the CI dashboard are evaluated. Each visualisation is designed with a specific analytical task in mind and highlights a particular aspect of a debate. For example, the debate network visualisation focuses on the content of the conversation. It aims to show how well issues have been divided into several ideas and how these ideas have been supported or countered with arguments (Figure 4). The conversation nesting visualisation focuses on the structure of the conversation. It gives a sense of the distribution of the conversation types, without focusing the viewer on the actual comments as the debate network visualisation does (Figure 5). The user activity analysis visualisation provides a sense of which users influence the conversation (Figure 6), and the activity analysis visualisation shows the evolution of a conversation over time (Figure 7). These four visualisations each focus on a specific aspect of a conversation: the content, the conversation structure, the conversation participants, and the conversation evolution. The overview visualisation, on the other hand, provides an aggregate of many analytics in one view, such as an indicator of the health of the conversation, its users, its evolution over time, as well as other information (Figure 3). Here, we want to test both types of analytics, aggregate as well as focused visualisations.

Quick overview visualisation

This visualisation (Figure 3) provides an overview of important aspects of a conversation. It is structured in several sections. At the top are three traffic lights. Each traffic light indicates the health of a conversation. The first traffic light indicates the degree of participation, the second viewing activity, and the third the degree of contribution. A green traffic light symbolises that everything is okay, orange indicates that there might be a problem, and red flags a problem.

The traffic light indicators help focus attention on the difference between lurking and active participation in the debate. They also allow users to distinguish between active participation through simpler activities, such as voting on other people’s ideas, and adding new contributions, which represents a more advanced form of participation in the debate. With this visualisation the community moderators, or the tutors or teachers in an educational context, can monitor participation within the online group and take remedial action where unhealthy participation is taking place.

The second section shows a mini bar chart, in which each bar shows the frequency of a contribution type. It shows the number of issues, ideas, supporting and counter arguments, and the number of votes in the conversation. The third section shows the viewing activity over time with a sparkline. Three sections follow, which point out textually the most voted entry, the most recently voted entry, and a list of new entries. The texts are clickable and lead directly to the entry. The last section provides word count statistics.
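To make the traffic light idea concrete, the sketch below computes a hypothetical participation indicator from the share of group members who have been active. The thresholds are invented for illustration; the paper does not specify the rules actually used by the Quick Overview visualisation.

```typescript
// Hypothetical traffic-light health indicator (thresholds are assumptions).
type Light = "green" | "orange" | "red";

function participationLight(activeUsers: number, totalMembers: number): Light {
  if (totalMembers === 0) return "red";
  const ratio = activeUsers / totalMembers;
  if (ratio >= 0.5) return "green";   // assumed: at least half the group is active
  if (ratio >= 0.2) return "orange";  // assumed: some activity, worth monitoring
  return "red";                       // assumed: very low participation
}

console.log(participationLight(4, 20)); // "orange"
```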

Figure 3. The Quick overview provides various ‘small’ visualisations, which help focusing only on emerging patterns or peaks in summary statistics, such as: trends in users’ contributions (contribution histogram), distribution of users’ actions (sparkline of viewing activities), highlights of the most voted ideas, most recent contributions, etc.

Debate network visualisation

This visualisation shows the network structure of a debate (Figure 4). It is basically an argument map, which can be defined as a network graph similar to a social network visualisation 8, but which, instead of displaying actors and their relations, shows the connections between different types of contributions to the debate, semantically labelled as issues, ideas and arguments (pro and con). Figure 4 shows an enlarged area of the conversation network with the central issue under discussion in the middle. Several ideas were contributed to this issue, and each of the ideas received either support or opposition. Users can zoom out to see the whole debate network or all networks that are part of the conversation. By clicking the nodes of the network, users can navigate to the actual entry of the conversation in the native online discussion tool where the conversation took place.
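As a rough illustration of how such an argument map can be laid out in the browser, the sketch below uses a D3 force simulation over a few invented IBIS-style nodes and links. It is a minimal example under assumed data and styling, not the CI Dashboard’s actual implementation.

```typescript
// Minimal force-directed argument map with D3 (example data is invented).
import * as d3 from "d3";

interface DebateNode extends d3.SimulationNodeDatum {
  id: string;
  kind: "issue" | "idea" | "pro" | "con";
}

const nodes: DebateNode[] = [
  { id: "issue-1", kind: "issue" },
  { id: "idea-1", kind: "idea" },
  { id: "pro-1", kind: "pro" },
];
const links: d3.SimulationLinkDatum<DebateNode>[] = [
  { source: "idea-1", target: "issue-1" }, // idea responds to issue
  { source: "pro-1", target: "idea-1" },   // argument supports idea
];

const svg = d3.select("body").append("svg").attr("width", 400).attr("height", 300);
const line = svg.selectAll("line").data(links).join("line").attr("stroke", "#999");
const circle = svg.selectAll("circle").data(nodes).join("circle")
  .attr("r", 8)
  .attr("fill", d => (d.kind === "pro" ? "green" : d.kind === "con" ? "red" : "purple"));

d3.forceSimulation(nodes)
  .force("link", d3.forceLink(links).id((d: any) => d.id).distance(60))
  .force("charge", d3.forceManyBody().strength(-150))
  .force("center", d3.forceCenter(200, 150))
  .on("tick", () => {
    // On every simulation step, move the SVG elements to the computed positions.
    line
      .attr("x1", (d: any) => d.source.x).attr("y1", (d: any) => d.source.y)
      .attr("x2", (d: any) => d.target.x).attr("y2", (d: any) => d.target.y);
    circle.attr("cx", (d: any) => d.x).attr("cy", (d: any) => d.y);
  });
```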

Figure 4. Debate network visualisation. It shows how the questions, ideas and arguments (pros and cons) that were proposed in the online discussion connect to each other and give shape to an argument map.

Conversation nesting visualisation

This visualisation 9 shows an entire conversation as zoomable nested circles of contributions. Each type of contribution is colour coded. The conversation visualised (see Figure 5) shows three issues (light purple). One issue received several ideas (dark purple), while the other issues displayed at the bottom did not contain many ideas. Supporting arguments are displayed with green circles, and red circles represent counter arguments. A click on a circle zooms into the circle and shows that part of the conversation in detail. Hovering over a circle displays the title of the contribution.

The Conversation Nesting Visualisation provides at-a-glance information such as: which issues are most debated (these are the biggest light purple circles, with the highest number of embedded circles), and which ideas have been most opposed or supported by participants (this can be identified by looking at the dark purple circles with the highest number of red or green circles embedded). The overall idea behind this visualisation is that, as the conversation becomes larger, it will help to focus on emerging discussion patterns that would otherwise be difficult to detect.
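A nested-circle view of this kind can be produced with a circle-packing layout. The sketch below uses D3’s pack layout over a small invented debate tree (the real visualisation is D3.js-based, see note 9, but the data, colours and sizing rule here are assumptions).

```typescript
// Minimal circle-packing sketch of a nested debate (example data invented).
import * as d3 from "d3";

interface DebateEntry {
  name: string;
  kind: "issue" | "idea" | "pro" | "con";
  children?: DebateEntry[];
}

const debate: DebateEntry = {
  name: "What should the medium of our final class project be?",
  kind: "issue",
  children: [
    { name: "A short film", kind: "idea", children: [
      { name: "Engaging for viewers", kind: "pro" },
      { name: "Takes long to produce", kind: "con" },
    ]},
    { name: "A poster", kind: "idea" },
  ],
};

// Every entry counts as one unit, so circle size reflects the amount of nested discussion.
const root = d3.hierarchy(debate).sum(() => 1);
const packed = d3.pack<DebateEntry>().size([500, 500]).padding(4)(root);

const svg = d3.select("body").append("svg").attr("width", 500).attr("height", 500);
svg.selectAll("circle")
  .data(packed.descendants())
  .join("circle")
  .attr("cx", d => d.x)
  .attr("cy", d => d.y)
  .attr("r", d => d.r)
  .attr("fill", d =>
    d.data.kind === "pro" ? "green" :
    d.data.kind === "con" ? "red" :
    d.data.kind === "idea" ? "#6a3d9a" : "#cab2d6")
  .append("title")                 // native tooltip shows the contribution title on hover
  .text(d => d.data.name);
```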

Figure 5. Conversation nesting visualisation. Each contribution to the debate is represented by a circle. When a circle is selected, its border is highlighted in black, and a roll-over message shows the content of the contribution. In the figure, the pink issue with the black border is the selected contribution, and the roll-over message shows the text of the contribution: ‘What should the medium of our final class project be?’

User activity analysis visualisation

This visualisation 10 shows the users’ activities in the form of an ordered bar chart (see Figure 6). On the left side it shows the user with the most activity and on the right the user with the least activity. Each bar is stacked, and each stack represents a specific contribution type. Light purple represents the number of added issues, dark purple the number of added ideas, green the number of added supporting arguments, red the number of added counter arguments, and yellow the number of votes made by this user. The bar chart below the user bar chart shows the overall number of contributions by type. Both parts of the visualisation are interconnected and offer dynamic and cumulative filtering. A click on a user updates the lower horizontal bar chart and shows only the contribution types of that user. Several users can be selected, which is instantly reflected in changes to the lower bar chart. Likewise, interaction with the horizontal bar chart automatically updates the user bar chart. Not displayed here is a table that represents the information of the bar charts in tabular form (Figure 7 shows such a table).
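The linked filtering between the two charts boils down to re-aggregating the underlying records whenever a selection changes. The following simplified sketch shows that logic in isolation; the actual visualisation is built with dc.js (see note 10), and the data and function names here are invented.

```typescript
// Simplified linked-filtering logic (illustrative only).
interface Activity {
  user: string;
  type: "issue" | "idea" | "pro" | "con" | "vote";
}

const records: Activity[] = [
  { user: "U1", type: "idea" },
  { user: "U1", type: "pro" },
  { user: "U2", type: "vote" },
];

// Selecting one or more users in the top chart filters the records that
// feed the lower contribution-type chart.
function contributionsByType(data: Activity[], selectedUsers: string[]): Map<string, number> {
  const filtered = selectedUsers.length
    ? data.filter(r => selectedUsers.includes(r.user))
    : data;
  const counts = new Map<string, number>();
  for (const r of filtered) counts.set(r.type, (counts.get(r.type) ?? 0) + 1);
  return counts;
}

console.log(contributionsByType(records, ["U1"])); // Map { "idea" => 1, "pro" => 1 }
```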

Figure 6. User activity analysis visualisation. The top histogram shows the users ordered by contribution frequency, from the most active (U1 with 16 contributions) to the least active (U11, U12, U13, U6 and U8, with 1 contribution each). The bottom histogram shows the most frequent contribution type (in this case supporting arguments were contributed the most, with 20 contributions overall in the group).

Activity analysis visualisation

The top area in Figure 7 shows a bar chart with the users’ activity for each single day. The middle area shows three horizontal bar charts. On the left it shows the activity per weekday (Monday, Tuesday, etc.), in the middle the activity split by contribution type (issue, idea, supporting and counter argument), and on the right the activity split by activity type (create, update, viewed). The third area in Figure 7 represents the data of the visualisation in the form of a table. Each row shows information about the date of the contribution, its title, its type, and its activity type. As with the User Activity Analysis Visualisation, a click on one part of the visualisation instantly updates the other parts. For example, users can select a date range, which updates the middle area and the table. Another example is that a user can select one or more bars of the middle area in order to change the timeline visualisation on top and the table at the bottom.
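The per-weekday and per-activity-type breakdowns are again simple group-by aggregations over timestamped activity records. A minimal sketch, with invented data and field names (the real visualisation is dc.js-based, see note 10):

```typescript
// Hypothetical grouping of timestamped activities by weekday.
interface TimedActivity {
  date: string;                              // ISO 8601 timestamp
  action: "create" | "update" | "viewed";
}

function byWeekday(activities: TimedActivity[]): Map<string, number> {
  const names = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];
  const counts = new Map<string, number>();
  for (const a of activities) {
    const day = names[new Date(a.date).getDay()];
    counts.set(day, (counts.get(day) ?? 0) + 1);
  }
  return counts;
}

console.log(byWeekday([
  { date: "2014-05-05T10:00:00Z", action: "create" },  // a Monday
  { date: "2014-05-06T12:30:00Z", action: "viewed" },  // a Tuesday
])); // Map { "Monday" => 1, "Tuesday" => 1 }
```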

Figure 7. Activity analysis visualisation

USER TESTING: SET UP

The evaluation presented in this paper is based on two studies. The first study was conducted as a field experiment in the open, while the second study took place in a usability lab. The main difference between the conditions of the two studies is the presence of a facilitator. Our aim is both to test the visualisations in terms of readability and usability, and to understand to what extent the presence of a facilitator affects the readability and usability of the visualisations.

The participants of the field experiment were members of the DebateHub ‘Design community 2014’ group. This group used DebateHub to discuss group issues, to come up with ideas and to weigh the ideas with supporting or counter arguments. The group members were invited to participate in the evaluation via email. Participation was voluntary. The lab experiment participants also voluntarily agreed to take part in the evaluation study; they responded to an advertisement for the study distributed through several channels of the Open University.

The general setup for both studies was the same, while the concrete implementation differed in order to adapt to the context of the study. The building blocks of the general setup consisted of a background questionnaire, an introduction to the scenario, a phase where the participants explored the visualisation in their own time, a task, and a questionnaire to evaluate the usability of the visualisation.

The participants of the field experiment received an email with links to a questionnaire 13 for each visualisation. The questionnaire guided the participants through all building blocks of the evaluation.

In the lab context, the facilitator of the study guided participants through the testing. Participants were asked to fill out the background questionnaire, were verbally introduced to the scenario, had time to explore the visualisation on their own and could ask questions, were asked the same task questions, and filled out the same usability questionnaire.

The scenario asked the participants to imagine being part of a large online conversation, so large that manual inspection of the contributions would be neither reliable nor effective. Instead, participants should use the analytics visualisations to make sense of the conversation.

Each visualisation contained a small task. Participants had to answer three to four questions for each visualisation. The participants could find all solutions to the task by using the visualisation. For example, two of the four questions for the user activity analysis visualisation were: ‘How many times did the most active user contribute to the debate?’ and ‘How many counter arguments have been made in the whole debate?’. The data for the visualisations were taken from the CIF file generated from the conversation of the ‘Design community 2014’ group. The data were the same for both groups.

EVALUATION

In the following we describe the background information of the participants, their task performance, and the usability scores for each visualisation. The field experiment participants as well as the lab participants could stop the task at any time. In the case of the field experiment, the participants were not required to fill out all questionnaires. During the lab study, on average two visualisations were evaluated per participant 14.

Background information

Field experiment: On average, 7.4 participants (mostly female) filled out the questionnaire for each visualisation (40 questionnaires filled in). Most of them visited DebateHub between two and ten times. Most participants made one contribution, and a few made more than 10 contributions. Analytics dashboards were mostly new to them; they had slightly more familiarity with visualisations for exploring data.

Lab experiment: 12 participants evaluated the visualisations (5 female and 7 male). Each visualisation was rated by five of them. Their familiarity with analytics dashboards ranged from novice to advanced; they were mostly novices with visualisations for exploring data, although all levels of familiarity (from novice to expert) were present.

Task performance

All visualisations have been designed with the aim of helping participants to make sense of large-scale online discussions. Still, in this evaluation we do not aim to assess sensemaking of the online debate but, in the first instance, users’ capability to read and make sense of the visualisation, which means their capability to extract and understand the information the visualisation was designed to convey. Each visualisation focuses on a specific perspective of the discussion, highlighting certain data over other information. Participants were therefore given the task to read and make sense of the visualisations correctly. To test how well the participants perform in making sense of the visualisations, we constructed tasks that are meant to capture the essential workings of each visualisation. The appendix lists all these tasks. For example, participants using the overview visualisation should be able to tell how many people participated, to have a sense of the state of the debate (i.e. how many counter arguments have been made), and to understand how much attention the debate received (i.e. what the viewing activity has been). Participants using the debate network visualisation should be able to quickly understand which are the popular issues (issues with many responses), which ideas have been challenged, and which ideas are well connected. The conversation nesting visualisation should provide a quick sense of the amount of argumentation as well as highlight solutions without any argumentation. The user activity analysis visualisation shifts the analytical focus to the participants of the conversation. The visualisation should quickly provide a sense of who is dominating the discussion, who is contributing most with ideas, etc. The activity analysis visualisation provides a view that considers the evolution of the conversation over time. Users of that visualisation should be able to quickly determine peak times of activity as well as get a sense of when the conversation is over.

Table 1 (Task performance) shows the performance of the participants on the tasks for each visualisation. It shows the number of participants (N), the number of questions (Questions), and the percentage of correct answers. For example, the field experiment group answering 90% of the cases correctly means that, out of 40 answers, 4 were answered incorrectly.

Table 1. Task performance
Visualisation            Questions   Field N   Field % correct   Lab N   Lab % correct
Quick overview           4           10        90                5       100
Debate network vis.      3           6         72                5       67
Conversation nesting     3           7         100               5       87
Activity analysis        4           4         75                5       80
User activity analysis   4           6         75                5       90

The lowest proportion of correct answers was recorded for the Debate Network Visualisation (Table 1), with 67% correct answers in the lab group, while the highest performing visualisations were the Quick Overview and the Conversation Nesting, with 100 percent correct answers in the lab and field tests respectively.

Therefore, we can conclude that most users could effectively read and make sense of the visualisations and the specific debate information that they were designed to highlight (with at most 33% of answers wrong). Nonetheless, users found the network-like visualisation (Figure 4) hardest to read and make sense of, when compared to circle packing (Figure 5), histograms (Figures 6 and 7) or other types of alternative visualisations such as traffic lights and general table views (Figure 3).

Usability

The usability of the visualisations was measured with the SUS usability questionnaire (Bangor, Kortum, & Miller, 2008, 2009; Brooke, 2013). Table 2 (Usability) shows the results of the usability questionnaire. The table shows the calculated SUS usability indices and the average of the ratings to the question ‘Overall, I would rate the user-friendliness of this visualisation as [worst, awful, poor, ok, good, excellent, best]’. The usability of all visualisations was rated between ok and excellent. The participants of the field experiment rated most visualisations as good, with the exception of the quick overview visualisation, which was rated as ok. The lab group was similar, rating all but the debate network visualisation as good.
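For reference, SUS scores follow the standard scoring procedure: each of the ten items is answered on a 1-5 scale, odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that calculation (shown for clarity; this is not the authors’ analysis code):

```typescript
// Standard SUS scoring (Brooke, 2013).
function susScore(responses: number[]): number {
  if (responses.length !== 10) throw new Error("SUS has exactly 10 items");
  const contributions = responses.map((r, i) =>
    i % 2 === 0 ? r - 1 : 5 - r   // items 1,3,5,7,9 sit at even indices 0,2,4,6,8
  );
  return contributions.reduce((a, b) => a + b, 0) * 2.5;
}

console.log(susScore([4, 2, 4, 2, 5, 1, 4, 2, 4, 2])); // 80
```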

From a usability perspective, we can therefore confirm similar results to those from the task performance analysis. The Debate Network visualisation was considered less usable, while the Conversation Nesting visualisation was considered the most usable by both the field and lab groups.

Interestingly enough, even though the Quick Overview was considered readable by the users in the task performance, the usability scores for this visualisation were overall the lowest, for both field and lab groups.

Table 2. Usability
Visualisation            Field N   Field SUS   Field Overall   Lab N   Lab SUS   Lab Overall
Quick overview           9         57.50       4.22            5       86.0      5.2
Debate network vis.      6         67.08       4.50            5       68.0      4.4
Conversation nesting     6         81.67       5.83            5       78.5      5.4
Activity analysis        4         53.75       4.50            5       79.5      5.4
User activity analysis   6         67.08       4.67            5       71.0      5.2

DISCUSSION AND CONCLUSIONS

Overall, the participants performed well on the tasks independently of the two testing conditions (in the field and in the lab). This is an encouraging finding, considering that most of the participants were novices regarding analytics dashboards and visualisations for exploring data. This implies that the effective use of the visualisations does not require specific learning or facilitation. This finding is particularly important for online distance education settings in which face-to-face facilitation cannot be provided. The visual analytics’ usability and readability are not affected by the presence of a facilitator in the room.

Both groups performed very well on the tasks of the quick overview visualisation and the conversation nesting visualisation, and both groups made mistakes in at most one third of the questions of the debate network visualisation. At least 75% of the questions were answered correctly for all other visualisations.

Most visualisations were rated as having good usability. Overall, the lab participants rated the usability higher than the field experiment group. Differences were found for the quick overview visualisation and the activity analysis visualisation, which were found more usable by the lab group. This may imply that these visualisations are more complex to read and that facilitation therefore improved task performance. The other three visualisations were rated at a similar level. The participants had more problems with answering the tasks for the debate network visualisation. Its usability ratings, low relative to the other ratings, may indicate usability issues that need to be followed up.

Our findings suggest that participants with little experience in using analytics visualisations were able to use very different interactive visualisations and successfully performed various information-seeking tasks, independently of the presence of a facilitator (in the lab vs in the field experiment context). These findings confirm that visualisations can be intuitive also for non-experts and can help summarise online debate content by conveying complex information in a compact and usable interface. Future research steps should focus on assessing to what extent these visual summaries can improve the emergence of Contested Collective Intelligence and, more specifically, to what extent they can improve sensemaking of complex debate dynamics.

From the perspective of research on Argument Visualisation, our findings also indicate that alternative visualisations of tree-like argument structures, such as the circle packing visualisation (Figure 5), are considered more usable and readable by users. Of course, these findings need to be checked against cases in which the online discussion content is larger. In those cases, in fact, argument network visualisations have proved to perform better than linear visualisations in information-seeking tasks (De Liddo & Buckingham Shum, 2014). We speculate this is due to the fact that large conversational data makes it increasingly difficult for users to identify relevant content; therefore the network structure, even if harder to read, does help in identifying relevant information as data increases. It would be interesting to compare alternative, more usable and readable, visualisations of arguments (such as the ones suggested in this paper) with common linear interfaces for online discussions to find out to what extent these visualisations can also improve large-scale information-seeking tasks.

Future Research and Reflections on Challenges and Opportunities for Learning and Technology Mediated Online Participation

The CI dashboard provides a wide range of useful and usable visualisations (over 30 visualisations, between large and small visualisations, and over 20 analytics alerts) readily available for application to analyse a multitude of different facets of online discussion. In this paper, we tested 5 of these visualisations, for which users with no training and little facilitation rated the usefulness and usability from good to excellent. This suggests promise for the application of advanced visual analytics as a means to improve the quality of online users’ participation in large-scale discussion and deliberation processes.

Online discussion and deliberation are key components in informal and formal learning. Sociocultural approaches to learning have clearly pointed out the importance of quality dialogue for successful learning (Mercer, 2004). Therefore, providing technologies for better learning dialogue is a key challenge to be addressed. In previous research, we have argued that ‘if learning dialogues and their outcomes are representative indicators to better scaffold the learning process, then argumentation theory and argumentation tools can improve the ways in which those processes can be analyzed and understood’ (De Liddo et al., 2011, p. 2).

In this paper, we have provided some evidence for this claim, in that we showed how visual analytics built on argumentation data provide usable and readable aids to the exploration of online discussion data, even with little training and no facilitation.

Visual Analytics for learning are at the core of research efforts in learning analytics, and in particular discourse-centric learning analytics (Knight & Littleton, 2015; De Liddo et al., 2011). These are analytics built from discourse data, produced through dialogue and discussion processes in formal and informal learning contexts. Learning Analytics research has evidenced the importance of providing usable and understandable learning analytics interfaces to improve different aspects of the learning process (Lukarov et al., 2015; Ferguson, 2012). In this context, the CI dashboard can be applied as a visual learning analytics service to improve the quality of learners’ participation in structured group discussions. The Dashboard provides visual summarisation and analytics of the structure and content of the debate, as well as of the learners’ participation in the debate (see for instance the traffic light visualisations in Figure 3). These visual analytics have proved usable and readable for untrained users.

Still, in order to facilitate large-scale deliberation, one of the key challenges is users’ participation and technology uptake. Users are often reluctant to try new technologies, even when there are ‘promised’ advantages. This is a fundamental issue, since research has indicated that mainstream online discussion technologies (social media such as blogs, forums, Facebook, Twitter, and so on) are not good at supporting informed dialogue and quality discussion; they do not support knowledge co-creation but rather promote divisive discourse and platform islands within online communities.

Alternative technologies for online discourse aim to use Visual Analytics interfaces as a means to promote quality debate and informed deliberation. Visual Analytics have an element of ‘fun’ (De Liddo & Buckingham Shum, 2014), which can play a key role in triggering users’ engagement, especially for young and more digitally native learners, but visualisations may be too complex. This complexity can scare participants away. Additionally, much raw data from analytics can be hard to interpret without training. This creates a tension in the user interface design of visual analytics technologies, which must strike the right balance between obtrusiveness and discoverability of the information about the online deliberation and Collective Intelligence process.

This paper shows promising results on the usability and readability of visual analytics in both facilitated and non-facilitated settings, but a lot still needs to be understood about the reasons why certain visualisations are more usable and understandable than others, in specific contexts and tasks. Future research should focus on finding viable solutions to the design of simple, usable but still highly explicative visualisations that can improve not only the quality of deliberation but also the level of participation in the debate.

Acknowledgements

This work was carried out as part of the CATALYST project, which is funded by the European Commission (grant agreement #6111188).

Notes

7. The CATALYST Interchange Format (CIF) is available at: https://raw.githubusercontent.com/catalyst-fp7/ontology/master/context.jsonld
8. CATALYST developed EdgeSense, a dedicated social network analytics tool for Collective Intelligence (http://catalyst-fp7.eu/open-tools/edgesense/).
9. This visualisation is based on the D3.js library.
10. This visualisation is based on the dc.js library.
13. Created with the maQ-online questionnaire generator developed by Ullmann (2004), available online at: http://maq-online.de
14. The exact details can be found in the project report of Ullmann et al. (2014).

REFERENCES

Adams, S. A. (2010). Revisiting the online health information reliability debate in the wake of “web 2.0”: An inter- disciplinary literature and website review. Int. J. Med. Inf. 79, 391-400. doi: https://doi.org/10.1016/j.ijmedinf.2010.01.006

Bangor, A., Kortum, P., & Miller, J. (2009). Determining what individual SUS scores mean: Adding an adjective rating scale. Journal of Usability Studies, 4, 114-123.

Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An Empirical Evaluation of the System Usability Scale. Int. J. Hum.-Comput. Interact., 24, 574-594. doi: https://doi.org/10.1080/10447310802205776

Bennett, S., Maton, K., & Kervin, L. (2008). The ‘digital natives’ debate: A critical review of the evidence. Br. J. Educ. Technol. 39, 775-786. doi: https://doi.org/10.1111/j.1467-8535.2007.00793.x

Brooke, J. (2013). SUS: a retrospective. J. Usability Studies, 8, 29-40.

Buckingham Shum, S. (2003). The Roots of Computer Supported Argument Visualization. In P. A. Kirschner, S. J. Buckingham Shum, & C. S. Carr (Eds.), Visualizing Argumentation. Computer Supported Cooperative Work. Springer, London, (3-24).

Buckingham Shum, S., et al. (2008). Cohere: Towards web 2.0 argumentation. COMMA, 8, 97-108.

De Liddo, A. (2014). Enhancing Discussion Forums with Combined Argument and Social Network Analytics, in A. Okada, S. Buckingham Shum, & T. Sherborne, (Eds.), Knowledge Cartography, Advanced Information and Knowledge Processing. Springer London, (333-359).

De Liddo, A., & Buckingham Shum, S. (2014). New Ways of Deliberating Online: An Empirical Comparison of Network and Threaded Interfaces for Online Discussion, in E. Tambouris, A. Macintosh, & F. Bannister, F. (Eds.), Electronic Participation, Lecture Notes in Computer Science . Springer Berlin Heidelberg, (90-101).

De Liddo, A., Buckingham Shum, S., & Catalyst, C. (2014a). Analysis of Pain Points & User Feedback on Design Concepts (Deliverable No. D2.1), Catalyst - Collective Applied Intelligence and Analytics for Social Innovation. The Open University, Milton Keynes.

De Liddo, A., Buckingham Shum, S., & Klein, M. (2014b). Arguing on the Web for Social Innovation: Lightweight Tools and Analytics for Civic Engagement, in 8th ISSA Conference on Argumentation. Presented at the Workshop: Arguing the Web: 2.0 , Amsterdam.

De Liddo, A., Sándor, A., & Buckingham Shum, S. (2012). Contested Collective Intelligence: Rationale, Technologies, and a Human-Machine Annotation Study. Comput. Support. Coop. Work CSCW, 21, 417-448. doi: https://doi.org/10.1007/s10606-011-9155-x

De Liddo, A., Buckingham Shum, S., Quinto, I., Bachler, M., & Cannavacciuolo, L. (2011, February). Discourse-centric learning analytics. In Proceedings of the 1st International Conference on Learning Analytics and Knowledge (pp. 23-33). ACM.

Ferguson, R. (2012). Learning analytics: drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5/6), 304-317.

Hair, D.C. (1991). LEGALESE: A Legal Argumentation Tool. SIGCHI Bull, 23, 71-74. doi: https://doi.org/10.1145/122672.122690

Harasim, L. (2000). Shift happens: online education as a new paradigm in learning. Internet High. Educ., 3, 41-61. doi: https://doi.org/10.1016/S1096-7516(00)00032-4

Klein, M. (2014). Deliberation analytics (No. D3.5), Catalyst - Collective Applied Intelligence and Analytics for Social Innovation. University of Zurich.

Klein, M., & Convertino, G. (2014). An embarrassment of riches. Commun. ACM, 57, 40-42. doi: https://doi.org/10.1145/2629560

Knight, S., & Littleton, K. (2015). Discourse- centric learning analytics: mapping the terrain. Journal of Learning Analytics, 2(1), 185-209.

Kravcik, M., Mikroyannidis, A., Pammer, V., Prilla, M., & Ullmann, T. D. (Eds.) (2017). Special Issue on: Awareness and Reflection in Technology Enhanced Learning. Int. J. Technol. Enhanc. Learn., Special Issue on: Awareness and Reflection in Technology Enhanced Learning, 9.

Lukarov, V., Chatti, M. A., & Schroeder, U. (2015). Learning Analytics Evaluation - Beyond Usability. In DeLFI Workshops (pp. 123-131).

Malone, T. W., & Klein, M. (2007). Harnessing Collective Intelligence to Address Global Climate Change. Innov. Technol. Gov. Glob., 2, 15-26. doi: https://doi.org/10.1162/itgg.2007.2.3.15

Malone, T. W., Laubacher, R., & Dellarocas, C. (2010). The collective intelligence genome. IEEE Eng. Manag. Rev., 38.

Mercer, N. (2004). Sociocultural discourse analysis: Analysing classroom talk as a social mode of thinking. JAL, 1, 137-168.

Nazemi, K., Ullmann, T. D., & Hornung, C. (2009). Engineering User Centered Interaction Systems for Semantic Visualizations. In C. Stephanidis (Ed.), Universal Access in Human-Computer Interaction. Addressing Diversity, Lecture Notes in Computer Science. Springer Berlin / Heidelberg, (126-134).

Novak, J. D. (1998). Learning, Creating, and Using Knowledge: Concept Maps as Facilitative Tools in Schools and Corporations. Mahwah, NJ: Lawrence Erlbaum.

Parent, M.-A., Liddo, A. D., Ullmann, T. D., & Klein, M. (2015). Catalyst - Project Testbed: Argument Mapping and Deliberation Analytics (Deliverable No. D4.2b), Catalyst - Collective Applied Intelligence and Analytics for Social Innovation.

Rahwan, I., Zablith, F., & Reed, C. (2007). Laying the foundations for a World Wide Argument Web. Artif. Intell., Argumentation in Artificial Intelligence, 171, 897-921. doi: https://doi.org/10.1016/j.artint.2007.04.015

Rienties, B., Cross, S., Marsh, V., & Ullmann, T. (2017). Making sense of learner and learning Big Data: reviewing 5 years of Data Wrangling at the Open University UK. Open Learn. J. Open Distance E-Learn, 32(3), 279-293.

Rubin, J., & Chisnell, D. (2008). Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests, 2 edition. Indianapolis: Ed. Wiley.

Scheuer, O., Loll, F., Pinkwart, N., & McLaren, B. M. (2010). Computer- supported argumentation: A review of the state of the art. Int. J. Comput.-Support. Collab. Learn., 5, 43-102. doi: https://doi.org/10.1007/s11412-009-9080-x

Sie, R. L. L., Ullmann, T. D., Rajagopal, K., Cela, K., Bitter-Rijpkema, M., & Sloep, P. B. (2012). Social network analysis for technology-enhanced learning: review and future directions. Int. J. Technol. Enhanc. Learn., 4, 172-190. doi: https://doi.org/10.1504/IJTEL.2012.051582

Smith, M. A., & Fiore, A. T. (2001). Visualization Components for Persistent Conversations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’01. ACM, New York, NY, USA, pp. 136-143. doi: https://doi.org/10.1145/365024.365073

Ullmann, T. D. (2004). maQ- Fragebogengenerator. Make a Questionnaire.

Ullmann, T. D., Liddo, A. D., & Bachler, M. (2014). Catalyst - Collective Intelligence Analytics Dashboard Usability Evaluation (Deliverable No. D4.6), Catalyst - Collective Applied Intelligence and Analytics for Social Innovation. The Open University.

Ullmann, T. D., Uren, V. S., & Nikolov, A. (2009). The SemSearchXplorer - Exploring Semantic Search Results with Semantic Visualizations. In S. Fischer, E. Maehle, R. Reischuk (Eds.), Lecture Notes in Informatics. Presented at the Informatik 2009: Im Focus das Leben, Gesellschaft für Informatik, Lübeck, pp. 3064-3076.

van Merriënboer, J. J. G., & de Bruin, A. B. H. (2014). Research Paradigms and Perspectives on Learning. In J. M. Spector, M. D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of Research on Educational Communications and Technology (pp. 21-29). New York, NY: Springer.

Verbert, K., Govaerts, S., Duval, E., Santos, J. L., Assche, F., Parra, G., & Klerkx, J. (2014). Learning Dashboards: An Overview and Future Research Opportunities. Personal and Ubiquitous Computing, 18, 1499-1514. doi: https://doi.org/10.1007/s00779-013-0751-2

Author notes

1 Thomas Daniel Ullmann is a Lecturer at the Institute of Educational Technology at The Open University, UK. With a background in empirical educational science and computer science, his research centres on Technology Enhanced Learning, with a current focus on text analytics for learning. He teaches on two courses of the MA in Online and Distance Education and, in his role as academic Data Wrangler, develops bespoke learning analytics solutions for the university. His detailed biography is available at http://qone.eu/ullmann.
email: t.ullmann@open.ac.uk
2 Anna De Liddo is a Senior Research Fellow in Collective Intelligence Infrastructures and leads the Knowledge Media Institute’s IDea Group, which investigates theories, methods and tools accounting for the centrality of social interaction and discourse in public engagement, urban informatics, e-democracy and social innovation contexts. Anna’s research focuses on models of dialogue and argumentation; models of crowdsourcing and participatory representation; and the design, implementation and uptake of online systems that seek to increase collective environmental awareness and the collective capacity to make sense of complex issues, such as social justice and environmental sustainability.
email: anna.deliddo@open.ac.uk
3 Michelle Bachler is a software developer working primarily on collective intelligence, knowledge mapping, and blockchain technologies.
email: michelle.bachler@open.ac.uk

ANNEX

FIELD EXPERIMENT QUESTIONNAIRES

The questionnaires for the evaluation of the visual analytics dashboard for collective intelligence visualisations consist of a general part, which was the same for all questionnaires, and a specific part, which was tailored to each visualisation. This annex shows these questionnaires. To save space, we first show the questions common to all participants and then the specific questions for each individual questionnaire. The position of the individual questions within the general questionnaire is marked. Text in square brackets was not part of the questionnaires; it has been added to explain them.

[The part of the questionnaires that has been the same for all visualisations:]

Usefulness and usability study of the CATALYST debate visualisations

Background information

Gender:

 Male  Female

How often did you visit the discussion about ‘Designing Community 2014’ on DebateHub?

 Never

 1 time

 2 to 4 times

 5 to 10 times

 more than 10 times

How often did you make a contribution to the discussion about ‘Designing Community 2014’ on DebateHub?

 Never

 1 time

 2-5 times

 5-10 times

 more than 10 times

How familiar are you with analytics dashboards in general?

 Expert  Advanced  Average  Basic experiences  Novice

How familiar are you with visualisations for analysing and exploring data?

 Expert  Advanced  Average  Basic experiences  Novice

How familiar are you with visualisations for analysing and exploring debates?

 Expert  Advanced  Average  Basic experiences  Novice

[…

Placeholder:

This is the place for the individual parts of the questionnaire, which differed for each visualisation. See below for these sections.

…]

Usability

SUS usability questionnaire

I think that I would like to use this visualisation frequently

Strongly disagree (1)      Strongly agree (5)

I found the visualisation unnecessarily complex

Strongly disagree (1)      Strongly agree (5)

I thought the visualisation was easy to use

Strongly disagree (1)      Strongly agree (5)

I think that I would need the support of a technical person to be able to use this visualisation

Strongly disagree (1)      Strongly agree (5)

I found that the various functions in this visualisation were well integrated

Strongly disagree (1)      Strongly agree (5)

I thought that there was too much inconsistency in this visualisation

Strongly disagree (1)      Strongly agree (5)

I would imagine that most people would learn to use this visualisation very quickly

Strongly disagree (1)      Strongly agree (5)

I found the visualisation very awkward to use

Strongly disagree (1)      Strongly agree (5)

I felt very confident using the visualisation

Strongly disagree (1)      Strongly agree (5)

I need to learn a lot of things before I could get going with this visualisation

Strongly disagree (1)      Strongly agree (5)

Overall, I would rate the user-friendliness of this visualisation as:

 Worst Imaginable  Awful  Poor  OK  Good  Excellent  Best Imaginable

The visualisation is responsive (loads quickly, no lag)

Strongly disagree (1)      Strongly agree (5)
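[Editorial note: the ten SUS statements above can be aggregated into a single 0-100 score using the conventional SUS scoring rule: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5. The following Python sketch illustrates this rule only; the function name and the example responses are ours and are not part of the questionnaire or the study data.]

def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from the ten
    item responses (each on a 1-5 scale), given in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly ten item responses")
    total = 0
    for item, response in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered items negatively worded.
        total += (response - 1) if item % 2 == 1 else (5 - response)
    return total * 2.5

# Purely illustrative responses (not data from this study):
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0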

[The individual parts of the questionnaires. These are the specific questions unique to each visualisation; they were inserted at the placeholder marked in the general questionnaire above.]

[Visualisation 1]

Quick overview visualisation

Please visit the visualisation by following this link: [Link to the quick overview visualisation]

It may take a short while until the visualisation is fully loaded. In the meantime, you might want to open the questionnaire in one browser window and the visualisation in another, so that you can easily switch between the visualisation and the questions of the questionnaire.

The visualisation provides an overview of important aspects of a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Short task

How many people participated in the debate?

How many counter arguments have been contributed?

What is the highest viewing count?

What is the average word count over all contributions?

[Visualisation 2]

Debate network visualisation

Please visit the visualisation by following this link: [Link to debate network visualisation]

It may take a short while until the visualisation is fully loaded. In the meantime, you might want to open the questionnaire in one browser window and the visualisation in another, so that you can easily switch between the visualisation and the questions of the questionnaire.

The visualisation shows contributions of users to a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Short task

Zoom out to see all debate networks.

Which issue received the most responses? (write down the text within the node)

How many ideas got challenged?

Which idea has the most connections? (write down the text within the node)

[Visualisation 3]

Conversation nesting visualisation

Please visit the visualisation by following this link: [Link to the conversation nesting visualisation]

It may take a short while until the visualisation is fully loaded. In the meantime, you might want to open the questionnaire in one browser window and the visualisation in another, so that you can easily switch between the visualisation and the questions of the questionnaire.

The visualisation provides an overview of the entire debate as nested circles of posts. Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Short task

How many pro arguments can you see?

How many contra arguments can you see?

How many solutions do not have any pro or contra arguments?

[Visualisation 4]

User activity analysis visualisation

Please visit the visualisation by following this link: [Link to the user activity analysis visualisation]

It may take a short while until the visualisation is fully loaded. In the meantime, you might want to open the questionnaire in one browser window and the visualisation in another, so that you can easily switch between the visualisation and the questions of the questionnaire.

The visualisation shows contributions of users to a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Short task

Please reset the visualisation before working on this short task

How many counter arguments have been made in the whole debate?

How many users are very active?

How often did the most active user contribute to the debate?

How many ideas did user u1 (user on the left) add?

[Visualisation 5]

Activity analysis visualisation

Please visit the visualisation by following this link: Activity analysis visualisation

It may take a short while until the visualisation is fully loaded. In the meantime, you might want to open the questionnaire in one browser window and the visualisation in another, so that you can easily switch between the visualisation and the questions of the questionnaire.

The visualisation shows activity of a debate over time.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Short task

Please reset the visualisation before working on this short task

What day of the week shows most activity?

What is the most frequent contribution type?

What is the most frequent activity type?

Between Thu 11 and Fri 19, how often was an idea created?

ANNEX

USABILITY LAB SESSION PROTOCOL

Each usability session in the usability lab followed a specific protocol, which is outlined here. The protocol was read out to the participants in order to standardise the usability sessions. The session protocol is based on the supplementary material of the Handbook of Usability Testing (Rubin & Chisnell, 2008). Our special thanks go to Shailey Minocha from The Open University, who gave invaluable advice on the design of the usability lab session.

Ahead of the session, the participants received three documents: the consent form, the initial information request form, and the project summary sheet. They were asked to read all three documents and, if possible, to fill out the consent form and the initial information request form and either bring them to the session or send them in advance.

All these documents and the session protocol are part of this annex.

[Project summary sheet]

Investigating the usefulness and usability of analytics visualisations of the CATALYST Learning Analytics Dashboard for Collective Intelligence

[Most participants worked through the protocol of two visualisations, to which they were randomly assigned. The protocol for each visualisation is listed below; a possible coding of the assistance levels recorded in it is sketched after this note. The usability lab study also captured participants’ verbal responses, which are not part of this study and are therefore not included here to save space.]
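[Editorial note: for each task in the protocol below, the facilitator ticked the level of assistance the participant needed before answering correctly. Purely as an illustration of how such records could be tabulated for descriptive analysis (the variable names and numeric codes are ours, not part of the original protocol), the levels might be encoded as an ordinal score in Python:]

# Illustrative (hypothetical) ordinal coding of the assistance levels
# recorded in the protocol; 0 = answered without help, higher values
# mean more assistance was needed before a correct answer was given.
ASSISTANCE_LEVELS = {
    "without help": 0,
    "after repeating the question": 1,
    "after explaining the visualisation": 2,
    "after showing the interaction with the visualisation": 3,
    "after showing the solution": 4,
}

def assistance_score(recorded_outcome):
    """Map a recorded outcome to its ordinal assistance score."""
    return ASSISTANCE_LEVELS[recorded_outcome]

# Example: a participant who needed the visualisation explained
print(assistance_score("after explaining the visualisation"))  # -> 2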

[Quick overview visualisation]

The visualisation provides an overview of important aspects of a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready we will proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

How many people participated in the debate?

Correct answer is: 13

Did answer correctly without help

Needed help to answer correctly (tick point below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many people viewed the debate in the last 5 days?

Correct answer is: 0

Did answer correctly without help

Needed help to answer correctly (tick point below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many counter arguments have been contributed?

Correct answer is: 3

Did answer correctly without help

Needed help to answer correctly (tick point below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

What is the highest viewing count?

Correct answer is: 253

Did answer correctly without help

Needed help to answer correctly (tick point below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

What is the average word count over all contributions?

Correct answer is: 131

Did answer correctly without help

Needed help to answer correctly (tick point below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

[Debate network visualisation]

The visualisation shows contributions of users to a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready we will proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Zoom out to see all debate networks.

Which issue received the most responses?

Correct answer is: What should the medium of our final class project be?

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many ideas got challenged?

Correct answer is: 3

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

Which idea has the most connections? (write down the text within the node)

Correct answer is: Book

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

[Conversation nesting visualisation]

The visualisation provides an overview of the entire debate as nested circles of posts. Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready we will proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

How many pro arguments can you see?

Correct answer is: 20

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many contra arguments can you see?

Correct answer is: 3

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many solutions do not have any pro or contra arguments?

Correct answer is: 5

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

[Activity analysis visualisation]

The visualisation shows activity of a debate over time.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready we will proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Please reset the visualisation before working on this short task

What day of the week shows most activity?

Correct answer is: Tuesday

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

What is the most frequent contribution type?

Correct answer is: idea

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

What is the most frequent activity type?

Correct answer is: view

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

Between Thu 11 and Fri 19, how often was an idea created?

Correct answer is: 4

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

[User activity analysis visualisation]

The visualisation shows contributions of users to a debate.

Read the description above the visualisation. Afterwards familiarise yourself with the visualisation by trying out the points mentioned in the description.

Once you are ready we will proceed to the questions. You will be asked a few questions which you can answer by using this visualisation.

Please reset the visualisation before working on this short task

How many counter arguments have been made in the whole debate?

Correct answer is: 3

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many users are very active?

Correct answer is: 1

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many times did the most active user contribute to the debate?

Correct answer is: 16

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

How many ideas did user u1 (user on the left) add?

Correct answer is: 7

Did answer correctly without help

Needed help to answer correctly (tick points below)

Did answer correctly after repeating the question

Did answer correctly after explaining the visualisation

Did answer correctly after showing the participant the interaction with the visualisation

Did answer correctly after showing the participant the solution

Where was the participant stuck:

[Protocol for after the task. This is the same for all participants and all visualisations.]

We are interested in the usability of these visualisations. I have prepared a questionnaire.

[Hand out the closing questionnaire.]

Please read each statement aloud, then circle the choice that most closely matches your answer and tell me what it is.

[Capture only spontaneous reactions. Do not drill in on each question]

[Closing Usability (SUS) questionnaire]

For the following usability questions we are interested in your immediate responses. Do not think too long about each question. If you feel that you cannot respond to a particular question, please mark the centre point of the scale for that question.

I think that I would like to use this visualisation frequently

Strongly disagree (1)      Strongly agree (5)

I found the visualisation unnecessarily complex

Strongly disagree (1)      Strongly agree (5)

I thought the visualisation was easy to use

Strongly disagree (1)      Strongly agree (5)

I think that I would need the support of a technical person to be able to use this visualisation

Strongly disagree (1)      Strongly agree (5)

I found that the various functions in this visualisation were well integrated

Strongly disagree (1)      Strongly agree (5)

I thought that there was too much inconsistency in this visualisation

Strongly disagree (1)      Strongly agree (5)

I would imagine that most people would learn to use this visualisation very quickly

Strongly disagree (1)      Strongly agree (5)

I found the visualisation very awkward to use

Strongly disagree (1)      Strongly agree (5)

I felt very confident using the visualisation

Strongly disagree (1)      Strongly agree (5)

I need to learn a lot of things before I could get going with this visualisation

Strongly disagree (1)      Strongly agree (5)

Overall, I would rate the user-friendliness of this visualisation as:

 Worst Imaginable  Awful  Poor  OK  Good  Excellent  Best Imaginable