Hi Reddit, I’m Tom Smith, MD for the UK’s Data Science Campus as part of the Office for National Statistics. I have 20 years’ experience using data and analysis to improve public services and am a life-long data addict. I have a PhD in computational neuroscience and robotics, an MSc in knowledge-based systems and an MA in theoretical physics. I’m currently Chair of the Advisory Board to the United Nations Global Platform for big data & official statistics, Member of Council for the UK Royal Statistical Society, and previously chair of the Environment Agency Data Advisory Group, vice-chair of the Royal Statistical Society Official Statistics section, and a member of the Open Data User Group ministerial advisory group to Cabinet Office.

Since the Campus was founded in 2017 we have been working on a huge range of projects including:

- using tax returns, ship tracking data and road traffic sensor data to allow early identification of large economic changes;
- exploring what internet traffic peaks and troughs can tell us about our lives;
- using satellite imagery to detect surface water and assess changes over time, for rapid detection of emerging issues;
- launching a hub focused on data science and AI for International Development, located at the Department for International Development (DfID), near Glasgow;
- supporting ONS, government and public sector organisations to increase their data science capability. We’re aiming to have 500 trained data science practitioners for UK government by 2021.

I’ll be here to talk about statistics, data and making the world a better place from 3-5pm GMT today.

Proof: https://twitter.com/ONSfocus/status/1237060713140625416

Ask me anything!
Twitter is a useful medium for the exchange of ideas, but it is not well suited to a thorough exposition of complex topics. In one recent exchange, I engaged author and journalist Peter Hitchens (@clarkemicah) on the relative harms of alcohol and cannabis, a discussion he later dissected in his blog (http://dailym.ai/13Dc1Rv). Having repeatedly attempted without success to post a reply on his blog, I have elected to do so here.
Intraclass correlation (ICC) is one of the most commonly misused indicators of interrater reliability, but a simple step-by-step process can ensure it is calculated correctly. In this article, I provide a brief review of reliability theory and interrater reliability, followed by a set of practical guidelines for the calculation of ICC in SPSS.
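The abstract's guidelines target SPSS; purely as an illustration of the ANOVA arithmetic behind one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), here is a minimal NumPy sketch. The rating matrix is invented example data, not from the article, and this is not a substitute for the article's SPSS procedure.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) array, one score per subject per rater.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-subject variance
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater variance
    ss_err = ss_total - ss_rows - ss_cols            # residual

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical example: four subjects each scored by three raters
scores = [[9, 8, 9],
          [6, 5, 6],
          [8, 7, 8],
          [4, 4, 5]]
print(round(icc2_1(scores), 3))  # → 0.919
```

High between-subject variance relative to rater disagreement yields an ICC near 1; choosing the right ICC form (one-way vs. two-way, consistency vs. absolute agreement, single vs. average measures) is exactly the kind of decision the article's step-by-step guidelines address.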
Boyer’s framework of scholarship was published before significant growth in digital technology. As more digital products are produced by medical educators, determining their scholarly value is of increasing importance. This scoping systematic review developed a taxonomy of digital products and determined their fit within Boyer’s framework of scholarship. We conducted a broad literature search for descriptions of digital products in the medical literature in July 2013 using Medline, EMBASE, ERIC, PsycINFO, and Google Scholar. A framework analysis categorized each product using Boyer’s model of scholarship, while a thematic analysis defined a taxonomy of digital products. Of 7,422 abstracts found, 524 met inclusion criteria. Digital products mapped primarily to the scholarship of teaching (85.4%), followed by integration (7.6%), application (5.5%), and discovery (1.5%). A taxonomy of 19 categories was defined; web-based or computer-assisted learning (41%) was described most frequently. We found that digital products are well described in the medical literature and fit into Boyer’s framework of scholarship, and we propose a taxonomy of digital products that parallels traditional forms of the scholarship of teaching and learning. This research should inform the development of tools to examine the impact and quality of digital products.
This article examines whether the factors that drive online sharing of non-scholarly content also apply to academic article titles. It uses Altmetric scores as a measure of online attention to articles from Frontiers in Psychology published in 2013 and 2014. Article titles with result-oriented, positive framing and more interesting phrasing receive higher Altmetric scores, i.e., more online attention; titles containing wordplay and longer titles receive lower scores. This suggests that the same factors that affect how widely non-scholarly content is shared extend to academia, which has implications for how academics can increase the impact of their work.
This article examines the U.S. Army’s Comprehensive Soldier Fitness (CSF) program from a scientific, ethical, and pragmatic viewpoint. CSF is one of the largest single applications of psychological research in history, intended to develop “resilience” in every U.S. Army soldier. I highlight several areas where the available information about the program either suggests the likelihood of specific problems, or is insufficient to allow the research community to evaluate the effectiveness of CSF independently of the claims made by its originators and assurances given by other non-disinterested parties. In particular, I question (a) whether a program based on resiliency training for school-aged children can hope to address the serious mental trauma, including PTSD, faced by soldiers deployed to war zones; (b) whether the instruments used to measure the performance of the program are reliable, valid, and appropriate for the circumstances in which they are being used; and (c) whether the design and delivery of the program takes sufficient account of the conflicting real-world demands placed on the individuals involved. I conclude that the program appears to have a number of potentially problematic aspects that require wider scrutiny from psychological researchers and practitioners.