For a full list of publications, please see my CV.
Note: * indicates a student that I (co-)advised.
Methods: wireframes, prototypes, usability inspections, heuristic evaluations, user testing (lab-based evaluation), think-aloud protocol, semi-structured interviews, log analysis
Summary: We published a paper at ACM CSCW 2019 that describes the GroundTruth system, our methods, and our findings; you can read it here. We also created a short video that you can watch here.
I led the design and development of a crowdsourcing web tool called GroundTruth to help expert investigators in fields like journalism and human rights advocacy find where photos are located. This task, known as image geolocation, is time-consuming and can lead to fatigue. Crowds, however, can potentially help scale up and speed up experts’ work. The design process took 18 months and involved creating wireframes, prototypes, usability inspections, and heuristic evaluations.
I then designed and conducted a mixed-methods evaluation (user testing) of GroundTruth with 11 expert investigators, who worked with 567 crowd workers in real time to geolocate images. The findings were published at a top-tier HCI conference (available here) and have informed the design of new features. This work has also been featured in the media.
After the evaluation, and after adding several new features that it surfaced, we deployed GroundTruth in the wild with investigators at Storyful, The New York Times, and Bellingcat. This deployment is currently ongoing.
Methods: participant observation, surveys, audio diaries, semi-structured interviews, focus groups
Summary: A teammate and I conducted fieldwork at CrimeCon’s CrowdSolve 2019 event in Seattle, WA, spanning 4 days with 250+ attendees. This work consisted of recruiting participants, administering surveys, conducting semi-structured interviews with 24 people, running focus groups with 15+ people, collecting audio diary entries, and participant observation.
We found that participants had a variety of motivations for attending, and we identified several challenges to conducting similar events in the future. We also noted several areas where technology and social media could help improve this process.
Methods: requirements gathering, surveys, stakeholder interviews, site visits, storyboarding, sketching, wireframing, prototyping, personas, affinity diagramming, UX inspection, heuristic evaluation, user testing.
Summary: Blacksburg Transit is a public transportation provider for Virginia Tech, Blacksburg, and Christiansburg, providing 1.3 million trips annually. Its primary mobile application is BT4U. The objective of this project was to redesign the BT4U mobile application to address its existing limitations and improve the overall user experience (UX).
The project began with a contextual inquiry to gather general feedback on the BT4U mobile application. Through interviews and survey responses, we learned that users felt the application had several limitations, including a poor user interface, a lack of personalization options, and slow performance; many also complained about the absence of push notifications. The insights from the inquiry laid a solid foundation for the subsequent stages of the project.

We then had regular interactions with our clients: the Project Manager (Tim Whitten), the Communications & Customer Support Specialist (Fiona Rhodes), and the Automation Creation, Inc. design team. These interactions were instrumental in narrowing the scope of the project to a subset of major UX shortcomings. Analyzing the raw work data collected in the inquiry, the team generated work roles, a work activity affinity diagram, system models, and user personas.

The team then moved to ideation and sketching, focusing on: (i) Bus capacity, (ii) Personalization options, (iii) Plan a Trip, (iv) Weather, and (v) Push Notifications. Thereafter, we focused on more advanced stages of UX development: storyboarding, wireframing, and prototype design. High-fidelity prototypes were developed using Axure RP 8. User and client inspections and evaluations of the prototypes were conducted, and the feedback was used for incremental refinements of the prototypes.
The prototype introduced by the team provides the framework for future application implementations and offers UX solutions to many of the problems identified during the contextual inquiry. Further, areas of future improvement were identified, baseline usability scores were established, and specific feedback was gathered at the request of the client. The usability solutions developed and recommended by the team were a result of a structured user experience process that improved the application design, functionality, customization, and ease of use.
Methods: experiment design, statistical analysis
Summary: Fake news poses a danger to today’s society. It has negatively affected public health and safety, reduced public trust in news media, and even has the potential to destabilize governments. Currently, social media and news aggregator services - such as Facebook, Twitter, and Google - employ algorithmic methods to prevent the spread of fake news, but often fail to do so. These corporations have thus resorted to manual verification by crowds of humans, who, in their evaluations, may not be free from their own inherent biases. This paper explores one aspect of fact-checking: how perceptions of a journalist’s social media profile affect the evaluation of content created by that journalist. I performed a 2x2 between-subjects study on Amazon Mechanical Turk and found that raters’ credibility evaluations are not affected by the credibility of the source’s social media profile. I also found that their evaluations are free from racial bias. I discuss possible interpretations of my findings and future work toward better understanding these results.
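To give a flavor of the kind of statistical analysis such a 2x2 between-subjects design involves, here is a minimal sketch in Python. The data, cell sizes, rating scale, and choice of test are all hypothetical illustrations for one factor (profile credibility), not the study’s actual data or analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical credibility ratings (1-7 scale) from raters randomly
# assigned to see a high- or low-credibility journalist profile.
# Cell means and spread are made up for illustration.
n = 50  # raters per condition
high_profile = rng.normal(4.8, 1.0, n).clip(1, 7)
low_profile = rng.normal(4.6, 1.0, n).clip(1, 7)

# Independent-samples t-test for the main effect of profile credibility
t, p = stats.ttest_ind(high_profile, low_profile)
print(f"t = {t:.2f}, p = {p:.3f}")
if p >= 0.05:
    print("No detectable effect of profile credibility on ratings")
```

A full analysis of the 2x2 design would also model the second factor and the interaction (e.g., a two-way ANOVA); this fragment shows only the simplest between-groups comparison.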
🏆This work won the Best Poster/Demo Award at AAAI HCOMP 2019.
Methods: experiment design, statistical analysis
Summary: Crowdsourced labeling of political social media content is an area of increasing interest, due to the contextual nature of political content. However, there are substantial risks of human biases causing data to be labeled incorrectly, possibly advantaging certain political groups over others. Inspired by the social computing theory of social translucence and by findings from social psychology, we built PairWise, a system designed to facilitate interpersonal accountability and help mitigate biases in political content labeling.
CrowdTruth is a community-oriented site that helps crowds debunk fake news, while promoting organized, efficient, and evidence-based dialogue around various news topics. In the future, CrowdTruth will include a set of modules to train people in improving their evaluative, investigative, and argumentation skills.