AUTHOREA

Preprints

Explore 11,927 preprints on the Authorea Preprint Repository

A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.

Why A General 45% Suicide Attempt Rate For Transgender Women Is Mathematically And O...
Hontas Farmer

March 22, 2018
An oft-repeated statistic is that 45% of transgender people attempt suicide at some point in their lives. A simple spreadsheet calculation shows that this probably cannot be the case given the observed increase in the number of transgender people. Something else must have been going on in the particular study that is often cited (and misquoted) for that statistic. If that statistic were generalizable to the whole transgender population, then over a 50-year period the transgender population would shrink by half.\ref{921809} This is the opposite of the observed trend. Therefore it is mathematically impossible for that number to generalize beyond the sample in the cited study.
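The spreadsheet behind this argument is referenced but not reproduced in this preview, so the following is only a toy projection under stated assumptions: the 45% lifetime figure is converted into a constant annual hazard, every attempt is (pessimistically, for illustration only) treated as removing a person from the population, and no one joins the cohort. Under those assumptions a cohort falls to roughly 55% of its starting size after 50 years, which is the "shrinks by about half" behaviour the abstract describes and the opposite of the observed growth.

# Toy cohort projection (illustrative only; the parameters and the
# attempt-equals-removal assumption are mine, not the author's spreadsheet).
LIFETIME_ATTEMPT_RATE = 0.45   # the oft-quoted lifetime attempt statistic
YEARS = 50

# Constant annual hazard p such that 1 - (1 - p)**YEARS == LIFETIME_ATTEMPT_RATE
annual_hazard = 1 - (1 - LIFETIME_ATTEMPT_RATE) ** (1 / YEARS)

population = 100_000.0         # hypothetical starting cohort, no new entrants
for year in range(YEARS):
    population *= (1 - annual_hazard)

print(f"annual hazard ~ {annual_hazard:.4f}")
print(f"remaining after {YEARS} years ~ {population:,.0f} "
      f"({population / 100_000:.0%} of the starting cohort)")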
 LIFE AND NATURAL SELECTION OF COMPLEX BIOCHEMICAL REACTIONS    
Minas Sakellakis

March 19, 2018
LIFE AND NATURAL SELECTION OF COMPLEX BIOCHEMICAL REACTIONS ABSTRACT Here we discuss the concept that life has to do with the evolution and survival of the most stable and fittest combinations of chemical reactions over time. In that case, regardless of the initial conditions, the result will be similar because of selection. Once organic chemistry comes into play, the spatial complexity of the interactions becomes too enormous for equilibrium. In addition, if one sets aside our perspective biases (which force us to divide life into individual organisms, systems, and organs), then life's reactions as a whole seem to be more about disorder than order. The final resulting reactions will appear to have survival and self-sustaining capacities, but this may be more of a self-fulfilling prophecy if the observers are exactly those resulting reactions. ARTICLE When somebody studies the phenomenon of viruses, he can see that when viruses are not in contact with a host organism, they are only a collection of chemical compounds that do not necessarily fulfill the criteria to be considered alive. When, on the other hand, they start reacting with a host, or in other words start making chemical reactions with the compounds of the host, they become alive. The same thing happens with prions, proteinaceous compounds that become alive, in a way, while they react with the proteins of a host. So a simple chemical reaction, while it is happening, is the simplest form of life, or the spark of life. This means that higher organisms, like all organisms, are summations of chemical reactions. What happens when they die? A disorder in one system of reactions (for example brain necrosis, meaning that in a large number of neural cells there is a defect in the reactions that are supposed to be happening there) leads to a cascade of disorders in other reactions, and then in others, and so on. The final result is a defect in the whole body, transmitted like a chain reaction. What is the difference between a man who is alive and a man who is dead? In both cases the body consists of similar elements and compounds. But in the first case these compounds are reacting with each other and the structure of the body changes every moment; in the second case the chemical reactions of the body are led to an equilibrium. The majority of scientists speculate that life originated from a single cell, the first cell on Earth, which constituted the first form of life, and that the evolution of this cell resulted in the formation of life as we know and see it today. A problem with this idea is that if there had been just a single cell on Earth and nothing outside of it, then not only would this not lead to the formation of more complicated forms of life, but the single cell would soon be dead for lack of food. In the beginning, life on Earth was simpler than it is today. This means that there was a system (network) of chemical reactions that gave its place to a more complicated one, and the system kept getting more and more complicated, with more reactions happening. This sounds a bit strange, because a system of chemical reactions that does not receive energy from outside is led to an equilibrium state. Question: Can systems of primordial and inorganic chemical reactions, with the help of external energy, avoid chemical equilibrium and move towards a state of constantly increasing complexity?
If you have a large number of initial substrates and they react with one another bidirectionally, then the number of substrates will increase over time. Additionally, once organic molecules with different stereochemistries are formed, the possibility of equilibrium virtually vanishes, because the possible ways for molecules to interact are greatly increased. In fact, after some time, only organic-based reactions will be present and selected, because all the others will have been lost to equilibrium. Complex organic stereochemistry does not reach an equilibrium state easily, owing to the variability of possible isoforms; thus, every time such molecules were created, they persisted and survived, adding to the complexity of the chemical system. Additionally, every time they reacted with other organic or inorganic material (e.g. water, CaCO3, etc.), they corrupted those materials, adding to stereochemical complexity and thus constantly adding novel material to the chemical machinery available for life, much as prions corrupt the chemistry of host organisms. This constantly increases the organic stereochemical reservoir, which can in theory undergo evolution and selection of the most sustainable chemical systems and eventually create ever more sustainable complex chemical systems, such as ourselves or other living beings. In conclusion, we see that a perpetually more complex system of organic chemicals with practically infinite stereochemical variations can easily be created, provided there is a source of external energy in the system. As a result of this complex system, nucleic acids, proteins, and membranes will inevitably be formed; thus the latter are not necessarily the starting point of life. Question: What other forces will act on this primordial chemical system, adding to non-equilibrium and determining its fate in the long term? 1) Hydrophobicity (hydrophobic bonds, spatial configuration, separation and isolation of chemical systems, membranes, etc.). 2) Another crucial factor is the property of some molecules to adhere strongly to each other, or to membranes. (In fact, if you put living cells and dead cells in a flask, you can sort them easily, because only the living ones adhere strongly to the walls.) Sticky reactions will eventually prevail and become the basis for further chemical complexity, because their compounds do not diffuse away and lead to dead ends. This makes the process multifocal rather than diffuse, enhancing its ability to thrive. To see the importance of stickiness, consider sponges. Recent studies have shown that they were among the first organisms on Earth, along with corals. They do not seem quite like other animals; in fact, I would say they are something in between, more like random chemical systems. However, the strong adhesion between molecules in sponges (along with multiple other factors) makes those systems sustainable over time. In fact, they were created because they were not destroyed. They can sustain themselves for millennia, and the same is true of corals. Such systems could serve as something like "chemical labs", performing chemical experiments for thousands of years before they die. Any chemical novelty that can sustain itself will survive and be selected.
3) In a chaos of chemical reactions, those with some kind of repeatability and periodicity will have an advantage and will not lead to a dead end, as they will be able to keep happening in the long term. 4) Likewise, reactions with the ability to promote their own existence will prevail and continue to exist, in a process that is a kind of natural selection and survival of the fittest reactions. For instance, if a process can make numerous copies of critical chemical compounds, it will have an advantage because it will be continuously over-represented in the chemical system. Question: How can chemical reactions like these, which occur in a random way, lead to the formation of the structures we see and perceive as animals, plants, organisms, etc.? Why don't we just see a random soup and mixture of gases and fluids? If you consider life as a WHOLE (without dividing it into species, organisms, etc.), you get a sum of nothing but chemical reactions. In other words, if you remove human-biased concepts such as organisms and systems, then life as a whole seems to lose a lot of its order. Imagine that, with the help of a source of light, we cultivate some chemical reactions in a small place. After a period of time they become more and more complicated. Let's hypothesize that someday the whole system becomes extremely complicated, to the point where we see nothing but a mixture of colors and shapes. This is life. But a human is part of this complicated system, which means that he sees things in a mirror-like way, because he is inside the system. He is a sum of reactions that keep happening, so it is very difficult for him to see life (the other reactions) in a fully objective way, because he is running inside the whole system. It is all a matter of perspective. For instance, the property of reproduction in living beings, which are chemical reactions, seems actually to be a result of the energy that forces the chemical reactions to keep happening. Life continues because chemical reactions continue. We, as an internal part of this system, see this as regeneration of the creatures, but only because we are running inside the system. Living organisms, likewise, are not dying simply because the chemical reactions that compose them continue to happen. If we analyze all these reactions, we will have a very good view of their homeostasis and the way they sustain themselves. As we said, we see the world from the inside, in a mirror-like direction, because we ourselves are part of things, so we judge things by their results. We think that homeostasis and self-sustainability are very magical and sophisticated self-sustaining mechanisms because we are the result of homeostasis, but the theory analyzed here says that homeostasis is simply the catalogue of the chemical reactions that are still happening, and just because they keep happening, the organism is alive. In other words, we find a purpose in every single reaction or procedure, but only because of our perspective. There is no particular plan that is favored in the flask full of chemicals; the system will simply keep happening. The final resulting reactions will appear to have survival capacities if the observers are exactly those resulting reactions: everything that happened leads to them. So the final combination of reactions will be the most sustainable of all combinations, given the particular conditions, because that is exactly what happened. Those reactions prevailed in the long term.
Life as we see it is simply the result of the chemical reactions on Earth. As we said, we are part of the system and do not realize it; if we were alien forms of life, for example, watching the Earth from outer space, we would see only a very complicated network of reactions. By this reasoning, life seems to be more of an invention of ours, or else a concept we use to describe anything that looks like us functionally. An organism is the reactions that we see, and we think they are something amazing because we see them separately from all the other reactions happening in the world. We judge them by their result, which is that they become like us. We are a part of the reactions that are happening as well, and when we see organisms that look like us, we think they are independent creatures, but actually they cannot be separated from the whole soup of reactions. Question: The basic form of life is chemistry, but as we go higher, we find levels of organization. Functions like killing, walking, and talking give some reactions an advantage for survival over others. But surviving is only important to us: an observer outside the system of life would not find any organization in these functions, because their results mean nothing to him. Question: The described system of chemical reactions is one of increasing entropy and disorder over time. But this is in contrast with our long-held belief that living beings are characterized by order, and thus by a lowering of entropy (see the ideas of Schrödinger). If we want to examine whether the entropy of living beings actually increases or decreases during evolution, we must abandon human-created terms such as "order" and instead check for entropy changes using more objective tools and concepts, such as heat release. For instance, one might argue that for a nonliving object, such as a random stone, all the reactions of living beings are meaningless; a stone would perceive life as a whole as a disordered chemical chaos. On the other hand, we are what we are because of some properties of these reactions, so from our perspective there is a lot of order there. Remember that we said earlier that the human is not a neutral, objective observer of things but is changing together with the system, and this confuses him. It means that if human entropy is rising more slowly than the entropy of the whole living system, he will think that his entropy is decreasing. One example is this: imagine a large number of birds flying next to one another in the same direction. If we tell them to fly apart from one another, so that the group starts separating, the entropy of the system starts rising. Imagine also that three birds somewhere in the group are very close to each other. If they separate more slowly than the others, and we consider these three birds as a system, then that system's entropy actually decreases relative to the whole system of birds. As we said, we view the world through our own eyes. This can lead to subjectivities and misconceptions in our viewpoint, especially with respect to systems in which we ourselves are involved. We can objectively judge changes in entropy in systems we are not involved in, but in a system of reactions, e.g. A+B->C+D+...+X+Z, if the reference frame (i.e. the
observer) is an insider subgroup of this system (for instance K+L->M+N) that judges changes in entropy inside the larger system, then this subset can only perceive entropy changes relative to itself. Remember the example of the birds. Question: Someone might say that if living beings are only a sum of complex chemical reactions, then what prevents them from degrading into chemical chaos? For instance, in the absence of a major adverse event or a catastrophic external factor, how can a human maintain his body structure in a viable state for nearly 100 years instead of spontaneously degrading towards a higher-entropy state? A possible answer lies in our inability to fully appreciate and comprehend big numbers. (Note: the numbers used here are rough approximations, used as an example to explain the idea.) Let's assume that the human body degrades towards a higher-entropy state every day, and that for this reason the body loses, hypothetically, 100 thousand chemical reactions per day. Suppose we have an 80-year-old man. He has lived 29,200 days, which means he has lost or changed nearly 3 billion reactions during his lifetime. If the total number of chemical reactions in his body is, say, 1 trillion, then after 80 years he will be composed of 997 billion reactions, which is still virtually 1 trillion. So the impact of the whole process on the chemical reaction count will be almost negligible macroscopically. Question: How can chemical reactions like these gain or sustain their repeatability, so that we see repeated patterns in life (e.g. reproduction)? Although in theory a process that protects some repeatable reactions could evolve and be selected, another option is possible, which I personally think is more likely, and it is this: are there truly repeatable processes in nature? For instance, if a descendant is 99% the same as its ancestor, and both are composed of 100 trillion reactions, they differ by 1 trillion reactions. Also, if you have two systems of 100 organic compounds with various stereochemistries that interact with each other and become increasingly complex, to the point that each system comprises 100 trillion different compounds, then one would expect 99% of the compounds of one system to be somewhat similar to those of the other, purely as a result of chance. Now, if two systems of 100 trillion reactions or possible interactions are exposed to the same chemical laws and conditions (variability prevails, hydrophobic bonds and adhesive properties prevail, stable molecules prevail, influx of external substances, the same temperature, etc.), then the two systems, being composed mainly of the same substances, will share approximately the same fate, at least to our eyes. For even if only 95% of the same things happen in both systems, they differ by many trillions of reactions, but to us that is enough to consider the two processes identical.
Peer review in the CiSE RR Track
Lorena A. Barba and George K. Thiruvathukal

March 14, 2018
In our editorial launching the new Reproducible Research Track in CiSE \citep*{Barba_2017} , we promised to explore innovations to the peer-review process. Because we require articles submitted to this track to adhere to practices that safeguard reproducibility, we must review for these aspects deliberately. For each submission, a reproducibility reviewer will be charged with checking availability, quality and usability of digital artifacts (data, code, figures). This reviewer (sometimes one of the track editors) will be known to the authors, and may interact with the authors during the review—for example, opening issues on a code repository. For this service, we ask that the authors recognize the reviewer in the article's acknowledgements section.
Review of Homology-directed repair of a defective glabrous gene in Arabidopsis with C...
Elsbeth Walker, dchanrod, and 8 more

March 12, 2018
Homology-directed repair of a defective glabrous gene in Arabidopsis with Cas9-based gene targeting  [Florian Hahn, Marion Eisenhut, Otho Mantegazza, Andreas P.M. Weber, January 5, 2018, BioRxiv] [https://doi.org/10.1101/243675] Overview and take-home messages: Hahn et al. have compared the efficiencies of two different methods that have previously been reported to enhance the frequency of homologous recombination in plants. The paper focuses on testing a viral replicon system with two different enzymes, nuclease and nickase, as well as an in planta gene targeting (IPGT) system in Arabidopsis thaliana. Interestingly, the authors chose GLABROUS1 (GL1), a regulator of trichome formation, as a visual marker to detect Cas9 activity and therefore homologous recombination. A 10 bp deletion in the coding region of the GL1 gene produces plants devoid of trichomes. Of the two methods, the in planta gene targeting approach successfully restored trichome formation in fewer than 0.2% of the ~2,500 plants screened, whereas the method based on the viral replicon machinery did not manage to restore trichome formation at all. This manuscript is of high quality; the experiments are well designed and executed. However, there are some concerns that could be addressed in the next preprint or print version. Below are some comments and suggestions that we hope will improve the manuscript.
Molecular Mechanisms of Plant Hormone Cross Talk in Root Elongation
Aayushi Ashok Sharma

March 08, 2018
This review of a bioRxiv article focuses on the molecular basis of ethylene-mediated inhibition of root elongation via auxin transport proteins. The finding improves our understanding of hormone cross talk in plant development. READ ARTICLE HERE.
Protein-protein interaction analysis of 2DE  proteomic data of  desiccation responsi...
Ryman Shoko

March 05, 2018
Abstract A lot of research has focused on investigating mechanisms of vegetative desiccation tolerance in resurrection plants. Various approaches have been used to undertake such research, including high-throughput 'omics' approaches such as transcriptomics and metabolomics. Proteomics has since become preferable to transcriptomics as it provides a view of the end-point of gene expression. However, most proteomics investigations in the literature publish differentially expressed protein lists and attempt to interpret such lists in isolation, despite the fact that proteins do not act in isolation. A comprehensive bioinformatics investigation can reveal more information on the desiccation tolerance mechanism of resurrection plants. In this work, a comprehensive bioinformatic analysis of the proteomic results published in Ingle et al. (2007) was carried out. GeneMANIA was used to carry out protein-protein interaction studies, while ClueGO was used to identify GO biological process terms. A preliminary map of protein-protein interactions was built up, and this led to the prediction of additional proteins that are likely to be connected to the ones identified by Ingle et al. (2007). Briefly, whereas 2DE proteomics identified 17 proteins as differentially regulated (4 de novo, 6 up-regulated and 7 down-regulated), GeneMANIA added 57 more proteins to the network (de novo: 20, up-regulated: 17, down-regulated: 20). Each protein set has unique GO biological process terms overrepresented in it. This study explores the protein pathways affected by desiccation stress from an interactomic perspective, highlighting the importance of advanced bioinformatic analysis. Introduction Resurrection plants can survive extreme water loss, persist for long periods in an abiotic state and, upon watering, rapidly restore their normal metabolism (reviewed inter alia in Farrant, 2007). Understanding the mechanisms of desiccation tolerance (DT) in resurrection plants is important, as they are deemed to be an excellent model for studying the mechanisms associated with DT. Proteomic profiling offers the opportunity to identify proteins that mediate the pathways involved in DT mechanisms when cells are subjected to desiccation stress. A number of proteomics studies have been reported for leaves of some angiosperm resurrection plants during desiccation (Röhrig et al., 2006; Ingle et al., 2007; Jiang et al., 2007; Abdalla et al., 2010; Wang et al., 2010; Oliver et al., 2011; Abdalla and Rafudeen, 2012, etc.). Since DT involves the integrated actions of many proteins, a systems-level understanding of experimentally derived proteomics data is essential to gain deeper insights into the protection mechanisms employed by resurrection plants against desiccation. In recent years, an increasing emphasis has been put on integrated analysis of gene expression data via protein-protein interactions (PPI), which are widely applied in interaction prediction, identification of functional modules, and protein function prediction. In this work, PPI analysis is applied to study the proteomics data obtained by Ingle et al. (2007) during the desiccation of Xerophyta viscosa leaves. In their study, using 2DE, they identified 17 desiccation-responsive proteins (4 de novo, 6 up-regulated and 7 down-regulated).
The aim of this work was to establish whether the proteins in each set interact and, if they do, whether any statistically significant GO biological process terms can be observed in each set. Methods Protein lists The initial protein lists used in the PPI analyses were obtained from the 2DE data of Ingle et al. (2007) (see Table 2 in Ingle et al. (2007)). Protein-protein interaction The Cytoscape v3.8.1 (Shannon et al., 2003) app GeneMANIA (Warde-Farley et al., 2010) was used to derive the interactome of empirically determined and predicted PPIs for the differentially regulated gene lists. Protein lists for 'up-regulated', 'down-regulated' and 'de novo' proteins were used as query lists for the PPI studies. Arabidopsis thaliana analogs of the desiccation-responsive protein sets were used as query genes, and the program was run with default settings. GO biological process functional enrichment analysis The Cytoscape app ClueGO v2.5.7 (Bindea et al., 2009) was used for enrichment of GO biological process terms. ClueGO extracts the non-redundant biological information for groups of genes/proteins using GO terms and can conduct cluster-cluster comparisons. In the present study, TAIR identifiers from the extended list of desiccation-responsive proteins obtained from GeneMANIA were used as input protein cluster lists, and ontology terms were derived from A. thaliana. The ClueGO 'cluster comparison' allowed the identification of biological process terms that were unique to each protein/gene list.
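For readers unfamiliar with what tools such as ClueGO report, the sketch below illustrates the general idea of GO-term overrepresentation using a hypergeometric test. The query size of 74 comes from the abstract (17 proteins identified by 2DE plus 57 added by GeneMANIA); the background size is an approximate Arabidopsis gene count, and the per-term counts are hypothetical. The actual test settings and corrections used by ClueGO may differ.

from scipy.stats import hypergeom

# Approximate / hypothetical counts, for illustration only.
background_genes = 27_000   # roughly the A. thaliana protein-coding background
term_genes = 150            # background genes annotated with a given GO term (hypothetical)
query_genes = 74            # 17 proteins from 2DE + 57 added by GeneMANIA
overlap = 9                 # query proteins annotated with that GO term (hypothetical)

# P(X >= overlap) when drawing `query_genes` genes without replacement
# from the background: the classic overrepresentation p-value.
p_value = hypergeom.sf(overlap - 1, background_genes, term_genes, query_genes)
print(f"overrepresentation p-value ~ {p_value:.2e}")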
On the sources of systemic risk in cryptocurrency markets
Percy Venegas

March 04, 2018
Value in algorithmic currencies resides literally in the information content of the calculations; but given the constraints of consensus (security drivers) and the necessity of network effects (economic drivers), the definition of value extends to the multilayered structure of the network itself --that is, to the information content of the topology of the nodes in the blockchain network and to the complexity of the economic activity in the peripheral networks of the web, mesh-IoT, and so on. It is at this boundary between the information flows of the native network that serves as the substrate to the blockchain and those of real-world data that a new "fragility vector" emerges; the intensity of demand (as encoded in traffic flows) gives rise to a field, and an increase in demand affects the structure of the field, akin to a phase change. Our research question is whether factors related to market structure and design, transaction and timing cost, price formation and price discovery, information and disclosure, and market maker and investor behavior are quantifiable to a degree that they can be used to price risk in digital asset markets. The results obtained show that while in the popular discourse blockchains are considered robust and cryptocurrencies anti-fragile, the cryptocurrency markets are in fact fragile. This research is pertinent to the regulatory function of governments, which are actively seeking to advance the state of knowledge regarding systemic risk and to develop policies for crypto markets, and to investors, who need to expand their understanding of market behavior beyond explicit price signals and technical analysis.
Margin-of-Error Calculator for Interpreting Student  and Course Evaluation Data
Kenneth Royal, PhD

February 19, 2018
Overview: An online calculator was created to help college faculty and K-12 teachers discern the adequacy of a sample size and/or response rate when interpreting student evaluation of teaching (SET) results. The online calculator can be accessed here: http://go.ncsu.edu/cvm-moe-calculator. About the calculator One of the most common questions consumers of course and instructor evaluations (also known as “Student Evaluations of Teaching”) ask pertains to the adequacy of a sample size and response rate. Arbitrary guidelines (e.g., 50%, 70%, etc.) that guide most interpretive frameworks are misleading and not based on empirical science. In truth, the sample size necessary to discern statistically stable measures depends on a number of factors, not the least of which is the degree to which scores deviate on average (standard deviation). As a general rule, scores that vary less (e.g., smaller standard deviations) will require a smaller sample size (and lower response rate) than scores that vary more (e.g., larger standard deviations). Traditional MOE formulas do not account for this detail, thus this MOE calculator is unique in that it computes an MOE with score variation taken into consideration. Other details about the formula also differ from traditional MOE computations (e.g., use of a t-statistic as opposed to a z-statistic, etc.) to make the formula more robust for educational scenarios in which smaller samples often are the norm. This MOE calculator is intended to help consumers of course and instructor evaluations make more informed decisions about the statistical stability of a score. It is important to clarify that the MOE calculator can only speak to issues relating to sampling quality; it cannot speak to other types of errors (e.g., measurement error stemming from instrument quality, etc.) or biases (e.g., non-response bias, etc.). Persons interested in learning more about the MOE formula, or researchers reporting MOE estimates using the calculator, should read/cite the following papers: James, D. E., Schraw, G., & Kuch, F. (2015). Using the sampling margin of error to assess the interpretative validity of student evaluations of teaching. Assessment & Evaluation in Higher Education, 40(8), 1123-41. doi:10.1080/02602938.2014.972338. Royal, K. D. (2016). A guide for assessing the interpretive validity of student evaluations of teaching in medical schools. Medical Science Educator, 26(4), 711-717. doi:10.1007/s40670-016-0325-9. Royal, K. D. (2017). A guide for making valid interpretations of student evaluations of teaching (SET) results. Journal of Veterinary Medical Education, 44(2), 316-322. doi:10.3138/jvme.1215-201r. Interpretation guide for course and instructor evaluation results Suppose a course consists of 100 students (population size), but only 35 (sample size) students complete the course (or instructor) evaluation, resulting in a 35% response rate. The mean rating for the evaluation item “Overall quality of course” was 3.0 with a standard deviation (SD) of 0.5. Upon entering the relevant values into the Margin Of Error (MOE) calculator, we see this would result in an MOE of 0.1385 when alpha is set to .05 (95% confidence level). In order to use this information, we need to do two things: First, include the MOE value as a ± value in relation to the mean. Using the example above, we can say with 95% confidence that the mean of 3.0 could be as low as 2.8615 or as high as 3.1385 for the item “Overall quality of course”.
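As a cross-check of the worked example, here is a minimal sketch of a t-based margin of error with a finite population correction. The exact formula used by the online calculator is not spelled out in this summary, so this form is an assumption, but it reproduces the quoted 0.1385 for N = 100, n = 35, SD = 0.5 and alpha = .05, and the 0.2769 mentioned for SD = 1.0.

from math import sqrt
from scipy.stats import t

def moe(n, population, sd, alpha=0.05):
    # Assumed form: two-tailed t critical value * standard error * finite population correction.
    t_crit = t.ppf(1 - alpha / 2, n - 1)
    standard_error = sd / sqrt(n)
    fpc = sqrt(1 - n / population)
    return t_crit * standard_error * fpc

print(round(moe(35, 100, 0.5), 4))   # ~ 0.1385, matching the worked example
print(round(moe(35, 100, 1.0), 4))   # ~ 0.2769, the SD = 1.0 scenario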
Next, in order to understand the MOE percentage, we must first identify the length of the rating scale and its relation to the MOE size. For example, if using a 4-point scale we would use an inclusive range of 1-4, where the actual length of the scale is 3 units (e.g., distance from 1 to 2, 2 to 3, and 3 to 4). So, a 3% MOE would equate to 0.09 (e.g., 3 category units x 3.00% = 0.09). Similarly, a 5-point scale would use an inclusive range of 1-5, where the actual length of the scale is 4 units. In this case, a 3% MOE would equate to 0.12 (e.g., 4 category units x 3.00% = 0.12). Finally, we would refer to the interpretation guide (below) to make an inference about the interpretive validity of the score. In the above example the MOE for the item “Overall quality of course” was 0.1385. If we are using a 4-point scale, this value falls between 0.09 and 0.15, which corresponds to 3 to 5% of the scale (this is good!). So, we could infer that the 35 students who completed the evaluation (the sample) constitute a sufficient sample size from a course consisting of 100 students (the population) to yield a statistically stable result for the item “Overall quality of course”, as the margin of error falls between ± 3-5%. Note: It is important to keep in mind that 35 students are adequate in this specific example because the scores deviated on average (standard deviation) by 0.5. If the standard deviation for the item were, say, 1.0, then 35 students would have yielded an MOE of 0.2769. This value would greatly exceed 0.15, indicating the MOE is larger than 5%, and would call into question the statistical stability of the score in this scenario.
For a 4-point rating scale (*please note the interpretation guide does not consist of rigid rules, but merely reasonable recommendations):
Margin of Error | Margin of Error (%) | Interpretive Validity*
Less than 0.09 | Less than ± 3% | Excellent interpretive validity
Between 0.09 and 0.15 | Between ± 3-5% | Good interpretive validity
Greater than 0.15 | Greater than ± 5% | Questionable interpretive validity; values should be interpreted with caution
For a 5-point rating scale (*please note the interpretation guide does not consist of rigid rules, but merely reasonable recommendations):
Margin of Error | Margin of Error (%) | Interpretive Validity*
Less than 0.12 | Less than ± 3% | Excellent interpretive validity
Between 0.12 and 0.20 | Between ± 3-5% | Good interpretive validity
Greater than 0.20 | Greater than ± 5% | Questionable interpretive validity; values should be interpreted with caution
Example at NC State University:
Discussing the culture of preprints with auditory neuroscientists
Daniela Saderi, Ph.D. and Adriana Bankston

February 18, 2018
I started writing this memo while on an airplane, flying back from sunny San Diego. While definitely one of the highlights of the trip, the sunshine was not the reason for my visit to Southern California. Instead, I was there with hundreds of other auditory neuroscientists from all over the world to attend the 41st MidWinter Meeting of the Association for Research in Otolaryngology (ARO).
Agate Analysis by Raman, XRF, and Hyperspectral Imaging Spectroscopy for Provenance...
Aaron J. Celestian and 7 more

February 15, 2018
Abstract The Getty Museum recently acquired the Borghese-Windsor Cabinet (Figure \ref{620486}), a piece of furniture extensively decorated with agate, lapis lazuli, and other semi-precious stones. The cabinet is thought to have been built around 1620 for Camillo Borghese (later Pope Paul V). The Sixtus Cabinet, built around 1585 for Pope Sixtus V (born Felice Peretti di Montalto), is of similar design to the Borghese-Windsor and also ornately decorated with gemstones. Although there are similarities in gemstones between the two cabinets, the Sixtus and Borghese-Windsor cabinets vary in their agate content. It was traditionally thought that all agate gemstones acquired during the 16th and 17th centuries were sourced from the Nahe River Valley near Idar-Oberstein, Germany. It is known that Brazilian agate began to be imported into Germany by the 1800s, but it is possible that some was imported in the 18th century or earlier. A primary research goal was to determine whether the agates in the Borghese-Windsor Cabinet are of single origin, or whether they have more than one geologic provenance. Agates are made of SiO2, mostly as the mineral quartz, but also as metastable moganite. Both quartz and moganite crystallize together as the agate forms, but moganite is not stable at Earth's surface and converts to quartz over tens of millions of years \cite{Moxon_2004,Peter_J_Heaney_1995,G_slason_1997}; thus relatively older agate contains less moganite. Agate from Idar-Oberstein is Permian in age (around 280 million years old), while agate from Rio Grande do Sul, Brazil, generally formed during the Cretaceous (around 120 million years old). It is thought that Rio Grande do Sul would have been a primary source of material exported to Europe because it is one of Brazil's oldest and largest agate gemstone producers. Since Cretaceous agate from Brazil is many millions of years younger than Permian agate from Germany, the quartz to moganite ratios between the two localities should be quite different. The agate gemstones of the Borghese-Windsor Cabinet cannot be removed for detailed Raman mapping experiments. Because of this, we first analyzed multiple agate specimens from the collections of the Natural History Museum of Los Angeles (NHMLA) and the Smithsonian Institution National Museum of Natural History (NMNH) using three different techniques: Raman mapping, XRF mapping, and hyperspectral imaging. Raman spectroscopy provides an easy method to distinguish the relative quartz:moganite ratios, and XRF analysis provides a measure of bulk geochemistry in agates. Maps have advantages over line scans and point analyses in that they give a better representation of the mineral content, can be used to exclude trace mineral impurities, and yield better counting statistics and averaging. Hyperspectral imaging provides a range of optical data from IR through UV wavelengths.
PREreview of "Frequent lack of repressive capacity of promoter DNA methylation identi...
Hector Hernandez-Vargas

February 15, 2018
This is a review of the preprint "Frequent lack of repressive capacity of promoter DNA methylation identified through genome-wide epigenomic manipulation" by Ethan Edward Ford, Matthew R. Grimmer, Sabine Stolzenburg, Ozren Bogdanovic, Alex de Mendoza, Peggy J. Farnham, Pilar Blancafort, and Ryan Lister. The preprint was originally posted on bioRxiv on September 20, 2017 (DOI: https://doi.org/10.1101/170506).
The State Of Stablecoins- Why They Matter & Five Use Cases
Sheikh Mohammed Irfan, Robert Samuel Keaoakua Lin, and 1 more

February 15, 2018
Price-stable cryptocurrencies, commonly referred to as stablecoins, have received a significant amount of attention recently. Much of this has been in hopes that they can fix some of the issues with cryptocurrency—most notably price instability. However, little analysis has been done with respect to the drivers and investment potential of stablecoins. Stablecoins fulfill different functions of money based on their implementation. As a result, they have unique trade-offs from one another and from physical currency (fiat) itself. Stablecoins offer a similar value proposition to fiat, but the two should not be compared on a one-to-one basis, as stablecoins contain unique trade-offs and benefits. These differences will drive the demand for these tokens while enabling specific use cases. The purpose of this paper is to shed light on the adoption and the potential market share growth of stablecoins given five selected use cases: dollarization, smart contracts, peer-to-peer (P2P) and peer-to-business (P2B) payments, safe haven for exchanges, and as a reserve currency. We will discuss the opportunities within each of these use cases and assess the factors which will determine the success of stablecoins. Using insights contained in this paper, technologists can think about how best to position themselves in the short, medium, and long term.
Emerging Countries and Trends in the World Trade Network: A Link Analysis Approach
Yash Raj Lamsal

February 14, 2018
Abstract The landscape of the world trade network has changed in the last few decades. This paper analyses the World Trade Network (WTN) from 1990 to 2016, using the trade data available on the International Monetary Fund (IMF) website, and presents the evolution of key players in the network using link analysis properties. Link analysis examines the link strength between nodes of a network to evaluate the properties of the network. The paper uses link analysis algorithms such as PageRank, hubs, and authorities to evaluate the strength or importance of nodes in the World Trade Network. A higher PageRank represents higher import dependencies, a higher authority score for a country denotes its tendency to import from hub countries, and a higher hub score indicates a country's tendency to export its final products to authority countries. The findings show the emergence of Asian countries, especially China, as key players in the world. Key Words: World Trade Network, link analysis, PageRank, Authority, Hubs Introduction The value of total global exports in 2016 was almost five (4.96) times the value in 1990. This fivefold growth in trade value was largely driven by Emerging Market Economies (EMEs) \cite{Riad2012}. This indicates that trade plays a vital role in national economies as well as in the international economy. In this context, studying world trade from a complex network perspective provides meaningful insights. The World Trade Network (WTN) is a weighted, directed complex network of countries around the world. In network science, a network is a collection of nodes and links, where links are relations between the nodes; in graph theory, a graph is a collection of vertices and edges, where edges are relationships between vertices. Graph and network are used interchangeably in this paper. In the WTN, nodes represent the countries of the world and a link represents the relationship between two countries, where the relationship is a flow of trade from one country to another. Studies of the WTN applying the network and graph theory framework have been growing and can be found in the literature \cite{Reyes2014,Deguchi2014} \cite{Ermann2011,Benedictis2010}. This paper uses link analysis algorithms to analyze the WTN. Link analysis extracts information from a connected structure like the WTN \cite{Chakraborty}. Understanding such a connected trade structure furnishes an immense source of information about the world economy, and this paper uses approaches that were initially adopted to understand the World Wide Web (WWW) \cite{Kleinberg1999}. Link analysis methods are also used to identify experts in social networks \cite{Kardan2011}. In this paper, the link analysis algorithms HITS (Hypertext Induced Topic Search) \cite{Kleinberg1999} and PageRank \cite{Page1998} are used to find the importance of countries based on the value of exports from one country to another. HITS and PageRank are also among the most frequently cited web information retrieval algorithms (Langville & Meyer, 2005). Link analysis of the WTN assigns an importance value to each country in the WTN. This paper studies and analyzes WTN data from 1990 to 2016 as a weighted, directed network. Using the graph framework and a link analysis perspective, the paper tries to identify emerging countries and their evolution during the study period.
The following section describes the link analysis algorithms used in the study, and the subsequent section describes and discusses the findings. HITS Algorithm The HITS algorithm is also known as the hubs and authorities algorithm (Kleinberg, 1999). The algorithm gives hub and authority rankings for each member of the network. The hub score of a node is the sum of the authority scores of all the nodes it points to, while the authority score of a node is the sum of the hub scores of all the nodes pointing to it. Hubs and authorities exhibit a mutually reinforcing relationship: a good hub is a node that points to many good authorities; a good authority is a node that is pointed to by many good hubs (Kleinberg, 1999). In the WTN, hubs are countries with large export values that export to good authority countries, and authorities are countries with large import values that import from good hub countries.
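To illustrate how these scores can be computed in practice, the sketch below builds a tiny weighted, directed toy trade network with networkx and runs PageRank and HITS on it. The country codes and trade values are made up for illustration; this is not the paper's IMF dataset or code.

import networkx as nx

# Toy directed trade network: an edge u -> v with weight w means
# country u exports goods worth w (arbitrary units) to country v.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("CHN", "USA", 500), ("CHN", "DEU", 120), ("DEU", "USA", 150),
    ("USA", "CHN", 130), ("NPL", "CHN", 1),   ("DEU", "CHN", 90),
])

# PageRank with edges pointing from exporter to importer, so score mass
# accumulates at heavy importers (the paper's "import dependence" reading).
pagerank = nx.pagerank(G, alpha=0.85, weight="weight")

# HITS: hubs ~ exporters feeding strong authorities, authorities ~ importers
# fed by strong hubs. (Whether edge weights are used by HITS depends on the
# networkx version; recent versions work from the weighted adjacency matrix.)
hubs, authorities = nx.hits(G, max_iter=1000, normalized=True)

for country in G.nodes:
    print(f"{country}: pagerank={pagerank[country]:.3f}, "
          f"hub={hubs[country]:.3f}, authority={authorities[country]:.3f}")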
DiversityNet: a collaborative benchmark for generative AI models in chemistry
Mostapha Benhenda, Esben Jannik Bjerrum, and 2 more

February 08, 2018
Commenting on the document is possible without registration, but for editing you need to:
Register on Authorea: https://www.authorea.com/
Join the DiversityNet group: https://www.authorea.com/inst/18886
Come back here
Code: https://github.com/startcrowd/DiversityNet
Blog post: https://medium.com/the-ai-lab/diversitynet-a-collaborative-benchmark-for-generative-ai-models-in-chemistry-f1b9cc669cba
Telegram chat: https://t.me/joinchat/Go4mTw0drJBrCdal0JWu1g
Generative AI models in chemistry are increasingly popular in the research community. They have applications in drug discovery and organic materials (solar cells, semi-conductors). Their goal is to generate virtual molecules with desired chemical properties (more details in this blog post). However, this flourishing literature still lacks a unified benchmark. Such a benchmark would provide a common framework to evaluate and compare different generative models. Moreover, it would make it possible to formulate best practices for this emerging industry of 'AI molecule generators': how much training data is needed, for how long the model should be trained, and so on. That is what the DiversityNet benchmark is about. DiversityNet continues the tradition of data science benchmarks, after the MoleculeNet benchmark (Stanford) for predictive models in chemistry and the ImageNet challenge (Stanford) in computer vision.
Alternative method for modelling structural and functional behaviour of a Storage Hyd...
Carlos Graciós, Rosa María, and 3 more

February 07, 2018
1. INTRODUCTION The relevant role of hydroelectric plants worldwide stems from their large share of energy production, between 30% and 60% of total power generation around the world. Efficiency, in terms of today's demanding requirements, depends on the correct balance among the generation, storage, and distribution strategies reported in the literature. With regard to generation in particular, the development of highly efficient control architectures has been preliminarily explored through the primary scheme analyzed in recent results. Here, it is important to evaluate the performance of each part and of the whole power generation system in order to define adequate control-law behaviour. According to Liu et al., the transient process in hydropower stations, including the interactions among hydraulics, mechanism, and electricity, is complicated. The closure of the guide vanes and spherical valve induces a change in the flow inertia, which causes changes in the turbine rotational speed and the hydraulic pressure in the piping system. When the working condition changes dramatically during transients, drastic changes in the water-hammer pressure and high rotational speed may lead to serious accidents that endanger the safety of the hydraulic structure and turbine unit [1–3] and affect power grid stability [4]. Therefore, simulating the transient process of hydropower stations is necessary. The calculation accuracy is directly related to the design of the water diversion system, the safe operation of the hydropower plant, and power quality. However, hydropower generation varies greatly from year to year with varying inflows, as well as with competing water uses such as flood control, water supply, recreation, and in-stream flow requirements. Given hydropower's economic value and its role in complex water systems, it is reasonable to monitor and protect the hydropower unit from harmful operating modes. A unit is often operated through a rough zone, which causes unit vibration and declining stability performance. Finally, in the case of Great Britain, one third of the complete electrical power is generated by a hydropower plant installed in Dinorwig, Wales, with special characteristics to be demonstrated in this report. Furthermore, Section 2 is devoted to describing the Dinorwig Hydropower Plant (DHP) in both structural and functional terms. The hybrid model proposed to define the unusual behaviour of the plant is developed in Section 3. Section 4 presents the model obtained by applying the MLD strategy introduced here. The results using the proposed method are discussed in Section 5. Finally, some conclusions are drawn in Section 6, followed by the acknowledgments and relevant references.
Plant Biology Journal Club 
Elsbeth Walker, Ahmed Ali, and 9 more

February 07, 2018
Medicago truncatula Zinc-Iron Permease6 provides zinc to rhizobia-infected nodule cells [Isidro Abreu, Angela Saez, Rosario Castro-Rodriguez, Viviana Escudero, Benjamin Rodriguez-Haas,  Marta Senovilla, Camille Laure, Daniel Grolimund, Manuel Tejada-Jimenez, Juan Imperial, Manuel Gonzalez-Guerrero , January 24, 2017 (preprint),  September 21, 2017 (in print), BioRxiv & Wiley-Blackwell]
Calculate tract based weighted means
Do Tromp

February 05, 2018
Extracting the weighted means of individual fiber pathways can be useful when you want to quantify the microstructure of an entire white matter structure. This is specifically useful for tract-based analyses, where you run statistics on specific pathways and not the whole brain. You can read more on the distinction between tract-based and voxel-based analyses here: http://www.diffusion-imaging.com/2012/10/voxel-based-versus-track-based.html. The prerequisite steps to get to tract-based analyses are described in the tutorials on this website: http://www.diffusion-imaging.com. In the first tutorial we covered how to process raw diffusion images and calculate tensor images. In the second tutorial we described how to normalize a set of diffusion tensor images (DTI) and run statistics on the normalized brain images (including voxel-based analyses). In the last tutorial we demonstrated how to iteratively delineate fiber pathways of interest using anatomically defined waypoints. Here we will demonstrate and provide code examples on how to calculate a weighted mean scalar value for entire white matter tracts. The principle relies on using the density of tracts running through each voxel, as a proportion of the total number of tracts in the volume, to get a weighted estimate. Once you have a proportional index map for each fiber pathway of interest, you can multiply this weighting factor by the value of the diffusion measure (e.g. FA) in that voxel to get the weighted scalar value of each voxel. Shout out to Dr. Dan Grupe, who initiated and wrote the core of the weighted mean script. As a note, this script can also be used to extract cluster significance from voxel-wise statistical maps; see an example of this usage at the end of this post. The weighted mean approach allows for differential weighting of voxels within a white matter pathway that have a higher fiber count, which is most frequently observed in areas more central to the white matter tract of interest. At the same time, this method will down-weight voxels at the periphery of the tracts, areas that often suffer from partial-volume issues because voxels that contain white matter also overlap gray matter and/or cerebrospinal fluid (CSF). To start off you will need a NIfTI-format tract file, for example as can be exported from TrackVis. See more details on how to do this in Tutorial 3. You also need scalar files, like FA or MD maps.
Overview of software packages used in this code:
TrackVis by MGH (download TrackVis here): http://trackvis.org/docs/
fslstats by FSL (download FSL here): http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils
fslmaths by FSL (download FSL here): http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils
Save the weighted mean code below into a text file named "weighted_mean.sh". Make sure the file permissions for this program are set to executable by running this line after saving:
chmod 770 weighted_mean.sh
Note that the code in "weighted_mean.sh" assumes:
A base directory where all folders with data are located: ${baseDir}
A text file with the structures you want to run. Here again the naming is defined by the name of the file, which must be located in the main directory, in a folder called STRUCTURES: ${baseDir}/STRUCTURES/${region}
A text file with the scalars you want to run. The naming here is defined by how your scalar files are appended, e.g.
“subj_fa.nii.gz”; in this case “fa” is the identifier of the scalar file: ${scalar_dir}/*${sub}*${scalar}.nii*
The location of all the scalar files in the scalar directory: ${scalar_dir}
A list of subject prefixes that you want to run.
Weighted mean code:

#!/bin/bash
# 2013-2018
# Dan Grupe & Do Tromp
if [ $# -lt 3 ]
then
  echo
  echo ERROR, not enough input variables
  echo
  echo Create weighted mean for multiple subjects, for multiple structures, for multiple scalars;
  echo Usage:
  echo sh weighted_mean.sh {process_dir} {structures_text_file} {scalars_text_file} {scalar_dir} {subjects}
  echo eg:
  echo
  echo weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/etc S01 S02
  echo
else
  baseDir=$1
  echo "Output directory "$baseDir
  structures=`cat $2`
  echo "Structures to be run "$structures
  scalars=`cat $3`
  echo "Scalars to be run "$scalars
  scalar_dir=$4
  echo "Directory with scalars "$scalar_dir
  cd ${baseDir}
  mkdir -p -v ${baseDir}/weighted_scalars
  finalLoc=${baseDir}/weighted_scalars
  shift 4
  subject=$*
  echo
  echo ~~~Create Weighted Mean~~~;
  for sub in ${subject}; do
    cd ${baseDir};
    for region in ${structures}; do
      img=${baseDir}/STRUCTURES/${region};
      final_img=${finalLoc}/${region}_weighted;
      for scalar in ${scalars}; do
        #if [ ! -f ${final_img}_${sub}_${scalar}.nii.gz ];
        #then
        scalar_image=${scalar_dir}/*${sub}*${scalar}.nii*
        #~~Calculate voxelwise weighting factor (number of tracks passing through voxel)/(total number of tracks passing through all voxels)~~
        #~~First calculate total number of tracks - roundabout method because there is no 'sum' feature in fslstats~~
        echo
        echo ~Subject: ${sub}, Region: ${region}, Scalar: ${scalar}~
        totalVolume=`fslstats ${img} -V | awk '{ print $1 }'`;
        echo avgDensity=`fslstats ${img} -M`;
        avgDensity=`fslstats ${img} -M`;
        echo totalTracksFloat=`echo "$totalVolume * $avgDensity" | bc`;
        totalTracksFloat=`echo "$totalVolume * $avgDensity" | bc`;
        echo totalTracks=${totalTracksFloat/.*};
        totalTracks=${totalTracksFloat/.*};
        #~~Then divide number of tracks passing through each voxel by total number of tracks to get voxelwise weighting factor~~
        echo fslmaths ${img} -div ${totalTracks} ${final_img};
        fslmaths ${img} -div ${totalTracks} ${final_img};
        #~~Multiply weighting factor by scalar of each voxel to get the weighted scalar value of each voxel~~
        echo fslmaths ${final_img} -mul ${scalar_image} -mul 10000 ${final_img}_${sub}_${scalar};
        fslmaths ${final_img} -mul ${scalar_image} -mul 10000 ${final_img}_${sub}_${scalar};
        #else
        # echo "${region} already completed for subject ${sub}";
        #fi;
        #~~Sum together these weighted scalar values for each voxel in the region~~
        #~~Again, roundabout method because no 'sum' feature~~
        echo totalVolume=`fslstats ${img} -V | awk '{ print $1 }'`;
        totalVolume=`fslstats ${img} -V | awk '{ print $1 }'`;
        echo avgWeightedScalar=`fslstats ${final_img}_${sub}_${scalar} -M`;
        avgWeightedScalar=`fslstats ${final_img}_${sub}_${scalar} -M`;
        value=`echo "${totalVolume} * ${avgWeightedScalar}"|bc`;
        echo ${sub}, ${region}, ${scalar}, ${value} >> ${final_img}_output.txt;
        echo ${sub}, ${region}, ${scalar}, ${value};
        #~~ Remember to divide final output by 10000 ~~
        #~~ and tr also by 3 ~~
        rm -f ${final_img}_${sub}_${scalar}*.nii.gz
      done;
    done;
  done;
fi

Once the weighted mean program is saved in a file you can start running code to run groups of subjects. See for example the script below.
Run Weighted Mean for a group of subjects:

#Calculate weighted means:
echo fa tr ad rd > scalars_all.txt
echo CING_L CING_R UNC_L UNC_R > structures_all.txt
sh /Volumes/Vol/processed_DTI/SCRIPTS/weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/Vol/processed_DTI/scalars S01 S02 S03 S04;

Once that finishes running you can organize the output data and divide the output values by 10000. This is necessary because, to make sure the output values in the weighted mean code have a sufficient number of decimals, they are multiplied by 10000. Furthermore, this code will also divide trace (TR) values by 3 to get the appropriate value of mean diffusivity (MD = TR/3).

Organize output data:

cd /Volumes/Vol/processed_DTI/weighted_scalars
for scalar in fa tr ad rd; do
  for structure in CING_L CING_R UNC_L UNC_R; do
    rm -f ${structure}_${scalar}_merge.txt;
    echo "Subject">>subject${scalar}${structure}.txt;
    echo ${structure}_${scalar} >> ${structure}_${scalar}_merge.txt;
    for subject in S01 S02 S03 S04; do
      echo ${subject}>>subject${scalar}${structure}.txt;
      if [ "${scalar}" == "tr" ]
      then
        var=`cat *_weighted_output.txt | grep ${subject}|grep ${structure}|grep ${scalar}|awk 'BEGIN{FS=" "}{print $4}'`;
        value=`bc <<<"scale=8; $var / 30000"`;echo $value >> ${structure}_${scalar}_merge.txt;
      else
        var=`cat *_weighted_output.txt | grep ${subject}|grep ${structure}|grep ${scalar}|awk 'BEGIN{FS=" "}{print $4}'`;
        value=`bc <<<"scale=8; $var / 10000"`;echo $value >> ${structure}_${scalar}_merge.txt;
      fi
    done
    mv subject${scalar}${structure}.txt subject.txt;
    cat ${structure}_${scalar}_merge.txt;
  done
done
#Print data to text file and screen
rm all_weighted_output_organized.txt
paste subject.txt *_merge.txt > all_weighted_output_organized.txt
cat all_weighted_output_organized.txt

This should provide you with a text file with columns for each structure & scalar combination, and rows for each subject. You can then export this to your favorite statistical processing software. Finally, as promised:
Finally, as promised: code to extract significant clusters from whole-brain voxel-wise statistics, in this case from FSL’s randomise output.

Extract binary masks for each significant cluster:

#Extract cluster values for all significant maps
dir=/Volumes/Vol/processed_DTI/
cd $dir;
rm -f modality_index.txt;
for study in DTI/STUDY_randomise_out DTI/STUDY_randomise_out2;
do
  prefix=`echo $study | awk 'BEGIN{FS="randomise_"}{print $2}'`;
  cluster -i ${study}_tfce_corrp_tstat1 -t 0.95 -c ${study}_tstat1 --oindex=${study}_cluster_index1;
  cluster -i ${study}_tfce_corrp_tstat2 -t 0.95 -c ${study}_tstat2 --oindex=${study}_cluster_index2;
  num1=`fslstats ${study}_cluster_index1.nii.gz -R | awk 'BEGIN{FS=" "}{print $2}' | awk 'BEGIN{FS="."}{print $1}'`;
  num2=`fslstats ${study}_cluster_index2.nii.gz -R | awk 'BEGIN{FS=" "}{print $2}' | awk 'BEGIN{FS="."}{print $1}'`;
  echo $prefix"," $num1 "," $num2;
  echo $prefix"," $num1 "," $num2 >> modality_index.txt;
  #Loop through significant clusters
  count=1
  while [ $count -le $num1 ];
  do
    fslmaths ${study}_cluster_index1.nii.gz -thr $count -uthr $count -bin /Volumes/Vol/processed_DTI/STRUCTURES/${prefix}_${count}_neg.nii.gz;
    let count=count+1
  done
  count=1
  while [ $count -le $num2 ];
  do
    fslmaths ${study}_cluster_index2.nii.gz -thr $count -uthr $count -bin /Volumes/Vol/processed_DTI/STRUCTURES/${prefix}_${count}_pos.nii.gz;
    let count=count+1
  done
done

Extract cluster means:

#Extract TFCE cluster means
rm -f *_weighted.nii.gz;
rm -f *_weighted_output.txt;
rm -f *_merge.txt;
cd /Volumes/Vol/processed_DTI;
for i in DTI/STUDY_randomise_out DTI/STUDY_randomise_out2;
do
  prefix=`echo ${i} | awk 'BEGIN{FS="/"}{print $1}'`;
  suffix=`echo ${i} | awk 'BEGIN{FS="/"}{print $2}'`;
  rm -f structures_all.txt;
  cd /Volumes/Vol/processed_DTI/STRUCTURES;
  for j in `ls ${prefix}_${suffix}*`;
  do
    pre=`echo ${j} | awk 'BEGIN{FS=".nii"}{print $1}'`;
    echo $pre >> /Volumes/Vol/processed_DTI/structures_all.txt;
  done
  cd /Volumes/Vol5/processed_DTI/NOMOM2/TEMPLATE/normalize_2017;
  rm -f scalars_all.txt;
  echo $suffix > scalars_all.txt;
  sh ./weighted_mean.sh /Volumes/Vol/processed_DTI/ structures_all.txt scalars_all.txt /Volumes/Vol/processed_DTI/${prefix}/${suffix} S01 S02 S03 S04;
done

Finally, run the previous organizing code on this output. You have now extracted weighted mean values for your selected pathways and clusters, and can export the resulting files to CSV and run statistics in your favorite statistical package (e.g., Python or R).
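If a comma-separated file is more convenient for import, the one-liner below is a minimal sketch; it assumes the default tab delimiter that paste uses when building all_weighted_output_organized.txt:

cd /Volumes/Vol/processed_DTI/weighted_scalars
# convert the tab-delimited summary to CSV for import into Python or R
tr '\t' ',' < all_weighted_output_organized.txt > all_weighted_output_organized.csv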
What changes emerge when translating feminist literature from English into Polish?  ...
Monika Andrzejewska
February 01, 2018
Abstract The aim of this essay is to investigate to what extent gender matters in translation. The discussion centres on the translation of English feminist writings into Polish: „A Room of One’s Own”, „Orlando” and „Written on the Body”. Unlike English, Polish is a highly inflected language, which requires gendered choices in the language used to describe characters. Thus, there is a risk that the translation may distort the original meaning of the whole text. I will begin by introducing some concepts from feminist translation theory, which draw attention to gender issues. Then, I will analyse the Polish translations of the books in question. The main argument of this essay is that because translating sexual ambiguity into Polish is impossible, feminist translators may turn this to their advantage to transfer their own attitudes. This ultimately may shape the overall perception of the book and the author by a given readership.
THE TRUTH IS IN THE SOUL OF BEHOLDER - Silence
Igor Korosec
February 01, 2018
A document by Igor Korosec, written on Authorea.
Why we should use balances and machine learning to diagnose ionomes
Serge-Étienne Parent
January 24, 2018
The performance of a plant can be predicted from its ionome (the concentration of elements in a living tissue) at a specific growth stage. Diagnoses have so far been based on simple statistical tools relating a Boolean index to a vector of nutrient concentrations or to unstructured sets of nutrient ratios. We are now aware that compositional data such as nutrient concentrations should be carefully preprocessed before statistical modeling. Projecting concentrations to isometric log-ratios confers a Euclidean space on compositional data, similar to geographic coordinates. By comparing projected nutrient profiles to a geographical map, this perspective paper shows why univariate ranges and ellipsoids are less accurate for assessing the nutrient status of a plant from its ionome than machine learning models. I propose an imbalance index defined as the Aitchison distance between an imbalanced specimen and the closest balanced point or region in a reference data set. I also propose, and raise some limitations of, a recommendation system in which the ionome of a specimen is translated to its closest point or region where high plant performance is reported. The approach is applied to a data set comprising macro- and oligo-elements measured in blueberry leaves from Québec, Canada.
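For readers less familiar with the compositional-data tools invoked here, the following is a standard textbook formulation of the isometric log-ratio (ilr) projection and the Aitchison distance, summarised here as background rather than taken from the paper itself. For a composition $\mathbf{x} = (x_1, \ldots, x_D)$ in the simplex, the ilr coordinates with respect to a sequential binary partition can be written as
\[
\mathrm{ilr}_j(\mathbf{x}) \;=\; \sqrt{\frac{r_j s_j}{r_j + s_j}}\,\ln\frac{g\!\left(\mathbf{x}_{R_j}\right)}{g\!\left(\mathbf{x}_{S_j}\right)}, \qquad j = 1, \ldots, D-1,
\]
where $R_j$ and $S_j$ are the two groups of parts contrasted at node $j$, $r_j$ and $s_j$ are their sizes, and $g(\cdot)$ is the geometric mean of the parts in a group. The Aitchison distance between two compositions is then simply the Euclidean distance between their ilr coordinates,
\[
d_A(\mathbf{x}, \mathbf{y}) \;=\; \left\lVert \mathrm{ilr}(\mathbf{x}) - \mathrm{ilr}(\mathbf{y}) \right\rVert_2 ,
\]
which is what gives compositional data the map-like Euclidean geometry the abstract refers to.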
From the bench to a grander vision
Adriana Bankston
January 20, 2018
As a kid, I was always very diligent in school and took it very seriously. As I was also curious and enjoyed a challenge, science was a good field for me to pursue. Plus, I grew up in a family of scientists, with both my parents and my grandparents working in science. But that didn’t necessarily mean I knew how academia worked. I moved to the U.S. after high school, graduated from college (with a B.S. from Clemson University), and attended graduate school at Emory University. While I had good grades and test scores, I still had a lot to learn about doing research in spite of having worked in a lab for one year prior to graduate school. But I knew that I enjoyed the bench work enough to pursue a graduate education, and I wanted to learn the scientific way of thinking. I had a really excellent graduate mentor (also female) who taught me everything I know about science. She taught me how to design experiments and interpret data, and pointed out when I was doing things wrong. She always pushed me to do better in multiple aspects of being a scientist, and taught me to speak up when I had a question or a thought, no matter how small it might have been. This ultimately allowed me to become more confident in my abilities as a scientist. She also managed work-life balance extremely well, which was really inspiring to see and proved to be very useful for me later. Overall, she was an amazing mentor and role model. Graduate school was pretty comfortable. I wasn't eligible to apply for many fellowships (at least not until I obtained my U.S. citizenship), but luckily the lab was well funded during my time there, which alleviated some pressure. I didn’t seek additional mentors because I felt that her guidance could point me in the right direction, which, at the time, was still an academic career. I also didn't really consider other career options during this time - if I had, I probably would have approached my scientific training differently. During my postdoctoral training, I started exploring other careers, although academia was still on the table. Many changes took place in my life during this time, which allowed me to mature in several ways. I still carried with me the confidence I had gained during graduate school, which materialized into wanting to become a leader in my field of choice. But while examining potential careers, I also kept an open mind. I attended my first national meeting related to postdoctoral issues (but unrelated to my bench research), which piqued my interest in this area. Together with another postdoc at the university, I subsequently established a career seminar series as a resource for postdocs to hear from professionals in non-academic careers. While I didn’t realize this at the time, the seminar had the potential to change the local academic culture. Trainees came up to me and thanked me for creating this resource, which made me feel good in so many ways. At some point I noticed that some of them were regularly attending the events, and also seemed to be asking more questions and interacting more frequently with some of the speakers following their talks. This was a great experience. After that, I organized regional symposia to connect trainees to each other, and got involved with national organizations focused on training and policy for graduate students and postdocs. During this time, I began to network with experts in these areas, and to speak up about certain issues in academia.
As I participated in more of these activities on the side of my postdoctoral work, I eventually decided to follow these strong interests that I was developing instead of trying to stay in academia. So, I quit my postdoc and continued to explore what I was really interested in doing, but now with a slightly clearer direction. As luck would have it, I obtained a travel award to attend a science advocacy meeting in Boston (organized by Future of Research and other groups), which interestingly took place during my last month as a postdoc. That meeting got me hooked on studying academia and advocating for scientists, although my interests were fairly broad at that point. But these topics seemed to fit me like a glove, and I knew that I had to get more involved with the group. The rest is history. At the Future of Research, I was fortunate enough to be involved early on with a project on tracking postdoc salaries nationally, which isn't something I ever imagined myself doing, but I loved it. This experience also opened me up to the idea of trying new things and going with the flow, instead of planning my next move in detail as I had always done. Over time, this project gave me a sense of purpose and direction while I was still figuring out my path. And no matter what else I did during this time, I always came back to that feeling of passion that I had developed for trying to create evidence-based change in academia, while advocating for transparency in the system. I was a bit surprised to see how naturally these ideas came to me, as I never knew that you could study something like this; nevertheless, I found it extremely fascinating. I later reflected upon why it was so easy for me to engage in this area, and realized that it essentially blended multiple aspects of my personality: 1) an interest in doing research with a purpose; 2) the feeling that I am making a difference with my work; 3) speaking up for a particular cause and backing it up with data; and 4) I had always been a bit of a rebel, which worked well for wanting to challenge the status quo. I finally felt that my life had a purpose and direction that I was happy to pursue. Without going into details about my contributions (see more on my website), volunteering for a cause I believe in (and knowing what that is) has been a very powerful motivator for engaging in this type of work. In this context, taking ownership of science policy projects and leading them has been a very fulfilling experience. I am now on the Future of Research Board of Directors, which I feel is the ideal leadership position for me. In some ways, this is the opportunity I had been waiting for all this time, I just didn't know it, and obviously couldn’t have predicted it. I’m very grateful to this group for making me feel that my opinion was valued and my voice counted during a time when I wasn’t quite sure where I was going. I now know the direction I want my life to take, which is quite amazing in itself. I also know that just having a job isn’t sufficient for me without contributing to a grander vision and the potential to make the world a better place. And while I am still looking for a position in this area, I am now aware that I am much more motivated by a mission than by money. I wouldn't have realized that if it weren't for my experience with Future of Research.
Some of the lessons I’ve learned along the way are: 1) Don’t let anyone tell you how to live your life; 2) Volunteering can pay off if you are truly invested in it; and 3) Gratitude is a good way to live your life in general. As I try to keep these lessons in mind moving forward, perhaps the biggest one is still that taking some time to discover what is truly important to me will be a worthwhile long-term investment in my future.
Medical Students Fail Blood Pressure Measurement Challenge: Implications for Measurem...
Kenneth Royal, PhD
January 19, 2018
Rakotz and colleagues (2017) recently published a paper describing a blood pressure (BP) challenge presented to 159 medical students representing 37 states at the American Medical Association’s House of Delegates Meeting in June 2015. The challenge consisted of correctly performing all 11 elements involved in a BP assessment using simulated patients. Alarmingly, only 1 of the 159 (0.63%) medical students correctly performed all 11 elements. According to professional guidelines (Bickley & Szilagyi, 2013; Pickering et al., 2005), the 11 steps involved in a proper BP assessment include: 1) allowing the patient to rest for 5 minutes before taking the measurement; 2) ensuring the patient’s legs are uncrossed; 3) ensuring the patient’s feet are flat on the floor; 4) ensuring the patient’s arm is supported; 5) ensuring the sphygmomanometer’s cuff size is correct; 6) properly positioning the cuff over a bare arm; 7) no talking; 8) ensuring the patient does not use his/her cell phone during the reading; 9) taking BP measurements in both arms; 10) identifying the arm with the higher reading as being clinically more important; and 11) identifying the correct arm to use when performing future BP assessments (the one with the higher measurement). All medical students involved in the study had confirmed that they had previously received training during medical school for measuring blood pressure. Further, because additional skills are necessary when using a manual sphygmomanometer, the authors of the study elected to provide all students with an automated device in order to remove auscultatory skill from the testing process. The authors of the study reported that the average number of elements correctly performed was 4.1 (no SD was reported). While the results from this study likely will raise concern among the general public, scholars and practitioners of measurement may also find these results particularly troubling. There currently exists an enormous literature regarding blood pressure measurements. In fact, there are even academic journals devoted entirely to the study of blood pressure measurements (e.g., Blood Pressure Monitoring), and numerous medical journals devoted to the study of blood pressure (e.g., Blood Pressure, Hypertension, Integrated Blood Pressure Control, Kidney & Blood Pressure Research, High Blood Pressure & Cardiovascular Prevention, etc.). Further, a considerable body of literature also discusses the many BP instruments and methods available for collecting readings, and various statistical algorithms used to improve the precision of BP measurements. Yet, despite all the technological advances and sophisticated instruments available, these tools likely are of only limited utility until health care professionals utilize them correctly. Inappropriate inferences about BP readings could result in unintended consequences that jeopardize a patient’s health. In fact, research (Chobanian et al., 2003) indicates that most human errors when measuring BP result in higher readings. Therefore, these costly errors may result in misclassifying prehypertension as stage 1 hypertension and beginning a treatment program that may be both unnecessary and harmful to a patient. This problem is further exacerbated when physicians put a patient on high blood pressure medication, as most physicians are extremely reluctant to take a patient off the medication because the risks associated with stopping are extremely high.
Further, continued usage of poor BP measurement techniques could cause patients whose blood pressure is under control to appear uncontrolled, thus escalating therapy that could further harm a patient. Until physicians can obtain accurate BP measurements, it is unlikely they can accurately differentiate those individuals who may need treatment from those who do not. So, I wish to ask the measurement community: how might we assist healthcare professionals (and those responsible for their training) in correctly practicing proper blood pressure measurement techniques? What lessons from psychometrics can be parlayed into the everyday practice of healthcare providers? Contributing practical solutions to this problem could go a long way in directly improving patient health and outcomes. References Pickering T, Hall JE, Appel LJ, et al. Recommendations for blood pressure measurement in humans and experimental animals part 1: blood pressure measurement in humans – a statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research. Hypertension. 2005;45:142‐161. Bickley LS, Szilagyi PG. Beginning the physical examination: general survey, vital signs and pain. In: Bickley LS, Szilagyi PG, eds. Bates’ Guide to Physical Examination and History Taking, 11th ed. Philadelphia, PA: Wolters Kluwer Health/Lippincott Williams and Wilkins; 2013:119‐134. Chobanian AV, Bakris GL, Black HR, et al. Seventh report of the Joint National Committee on prevention, detection, evaluation and treatment of high blood pressure. Hypertension. 2003;42:1206‐1252. Rakotz MK, Townsend RR, Yang J, et al. Medical students and measuring blood pressure: Results from the American Medical Association Blood Pressure Check Challenge. Journal of Clinical Hypertension. 2017;19:614–619.
Trust Asymmetry
Percy Venegas
January 18, 2018
In the traditional financial sector, players profited from information asymmetries. In the blockchain financial system, they profit from trust asymmetries. Transactions are a flow, trust is a stock. Even if the information asymmetries across the medium of exchange are close to zero (as is expected in a decentralized financial system), there exists a “trust imbalance” in the perimeter. This fluid dynamic follows Hayek's concept of monetary policy: “What we find is rather a continuum in which objects of various degrees of liquidity, or with values which can fluctuate independently of each other, shade into each other in the degree to which they function as money”. Trust-enabling structures are derived using Evolutionary Computing and Topological Data Analysis; trust dynamics are rendered using Fields Finance and the modeling of mass and information flows of Forrester's System Dynamics methodology. Since the levels of trust are computed from the rates of information flows (attention and transactions), trust asymmetries might be viewed as a particular case of information asymmetries, albeit one in which hidden information can be accessed, of the sort that neither price nor on-chain data can provide. The key discovery is the existence of a “belief consensus” with trust metrics as the possible fundamental source of intrinsic value in digital assets. This research is relevant to policymakers, investors, and businesses operating in the real economy, who are looking to understand the structure and dynamics of digital asset-based financial systems. Its contributions are also applicable to any socio-technical system of value-based attention flows.
The Integrity of a Free-Fight Committee (Integritas Panitia Tarung Bebas)
Saortua Marbun
January 18, 2018
The Integrity of a Free-Fight Committee (Integritas Panitia Tarung Bebas). Saortua Marbun \citep{marbun2018}. saortuam@gmail.com | http://orcid.org/0000-0003-1521-7694. DOI: 10.22541/au.151624089.92438669. ©2018 Saortua Marbun.