The article "Exemplifying Computational Thinking Scenarios in the Age of COVID-19: Examining the Pandemic’s Effects in a Project-Based MOOC" describes a project-based MOOC designed to teach a wide variety of STEM students "computational thinking."  This course was being conducted during the initial outbreak of COVID-19 and thus has some really interesting data relating to the uptake in student enrollment as well as a focus on students choosing projects based on COVID-19.  In addition to the COVID-19 data, I found two aspects of the article to be particularly interesting.  First, the article describes teaching a "computational thinking" course without teaching programming.  This is the first time I have seen this type of idea implemented in an entire course.  I am very interested in learning the pros and cons to teaching "Computational Thinking" in this way.   The second thing I noticed about this article is that the course is being taught as a project-based MOOC to around 1000 students.  This seems like a lot of work and I would be curious to learn more about how the course was scaled.  In particular, how were that many projects organized and graded (peer grading, team grading something else)?  What other things did the instructors do to help scaling?Overall, I thought this article was well written and is a great addition to the COVID-19 special issue.  Although I would like to see more details about no-programming "Computational Thinking" and scaling project based MOOCs, I think the authors made a reasonable discussion leaving these topics out given the length limitation.Unfortunately, it looks to me like even more needs to be cut.  There is a 3000-word limit for CiSE department articles, with each figure/table counting as 250 words.  Given that, we need to work with the authors to try and cut another 1300 words form the current draft.  
The article on "The HPC Certification Forum" describes the goals and objectives behind a community effort to build a testing and certification system for HPC (high-performance computing) skills.  As the profession of  Research Facilitation grows, there is a clear benefit to having a common testing and certification framework to help define the technical skills needed in the profession. The article motivates the need for this system and encourages further reading in two referenced articles.  Some noticeable aspects of the approach laid out in the article include:This is a volunteer and community driven effort.The authors of the article come from academia and not commercial training,  which lends credibility to the approach.  This is a global effort with contributors in multiple countries. The testing/certification is hierarchical and can be viewed as a smorgasbord of topics that can be tailored to different situations and goals (instead of dictating a one-size-fits-all).Although a good overview, a lack of details in the article makes it difficult for the reader to know exactly what is intended.  For example, when reading the article I found myself asking these general question:Although a few applications were mentioned, the primary audience for the  testing/certification system is unclear. Is it research facilitators?  HPC Users?  System administrators? Software Developers? It may be "all-of-the-above" and the hierarchical nature of the testing is intended to cover everything. However, I have to believe this would make the material unfocused and difficult to categorize. What qualifies as HPC is still an open and often debated question.   How are decisions made as to what should be included and what should be excluded?  How are priorities set?  For example, I did a quick review of the website and noticed there was no reference to High Throughput Computing (HTC), is this intentional?  Is this not HPC?  Who makes that decision?Is there any attempt to define/use a pedagogical framework for building or evaluating the individual assessments?  Quite a bit of research has gone into how to build effective evaluations and it seems shortsighted to build a testing framework without considering this body of work. As a volunteer organization it must be difficult to make final decisions.  How is the group organized?  There is a governance tab on the website with a mention of 'voting rights' but it is unclear how decisions are made.   Is there a benevolent dictator?  A type of peer-review process?  Is everything up to popular vote?  What happens when I want my highly specialized question or topic added to the hierarchy?  What is being done to prevent chaotic growth of the materials?  What is being done to avoid bias?   This is not a one and done project.  I am concerned about the longevity of this approach.  This seems to be a labor of love but I don't see any indication of a long-term strategy.  How is the project funded?   As a volunteer organization only, what efforts are being made to connect with other organizations (ACM, XSEDE, PRACE) to help ensure continuity and growth.  As this article is an introduction to the project and lacks a lot of details, leaving the reader with a lot of questions could be both good and bad.    I think part of the goal of the article is to pique the reader's interest and provide just enough information to consider going out and learning more.  However,  lack of details may leave the reader frustrated.  
The following are some suggestions to improve the article:

- Change the subtitle from "Community-Lead" to "Community-Led."
- Describe the current state of the project and give a timeline. How far along is the project? For example, state when version 1.0 of the tests will be available (or some similar milestone). This information is unclear from the article and also unclear from the website.
- Add a description of, and references to, the pedagogical approach to question writing and selection (if there is one).
- Give a clear definition of what it means to be an HPC "practitioner." Even if the definition is intended to be open, this needs to be clear to the reader.
- Provide more examples. A few sections of the hierarchy were explained, but I would like to have seen some example questions.
- Add a link to the website in the list of references.