Journals and Reviews
This section is for those interested in additional knowledge about manuscripts, journals and the communication channels associated with academic publications. There are many journals, and you must decide which one is suitable for your publication, or for retrieving the papers relevant to your research. In addition, there are journals and journal practices that must be avoided because they are not ethical. This section presents some ideas about how journals are evaluated, how prestige and reputation play an important role as a mechanism of control against malpractice, and what the main roles of the editors, associate editors, reviewers, assistants, etc. are in the process. Other aspects of academic publishing are also reviewed: Who should be a coauthor of a paper? How are reviewers selected? What should you do after a paper is rejected?
What is a predatory journal? From Wikipedia: "Predatory publishing (...) is an exploitive academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy and without providing the other editorial and publishing services that legitimate academic journals provide, whether open access or not. They are regarded as predatory because scholars are tricked into publishing with them, although some authors may be aware that the journal is poor quality or even fraudulent." International audited SCI journals are generally free of predatory practices. In Finland, the JUFO list aims to include journals that follow fair publishing practices, although it sometimes fails. Some years ago there was an initiative to list all predatory journals and publishers, although it was the subject of many lawsuits.
Finnish Publication Forum (Jufo)
Beall's list of predatory journals
The right journal
Selecting the best journal for your research is a very important step and must be planned carefully. Some journals carry a lot of prestige, and publishing there can help your career. At the same time, you should evaluate the audience of these journals: if you submit a paper to the wrong journal, even if it is eventually accepted and published, the audience may not understand it or have any interest in it. An easy rule for selecting the journal is to check your own references once your paper is written: the journals you already cite often are probably relevant for your own paper. As usual, you get better at this process with experience. The more familiar you become with journals, their structure, layout, audience, editors and topics, the easier this step will be. Of course, whatever the options, you should aim for audited journals (see above). Additional criteria are the fees, the open access policy, and the average time the journal takes to respond to or accept the paper. The fees may be substantial, so you should check whether a project can cover them, or whether the department's or faculty's policy includes covering fees for high-impact journals. As for the time it takes, you can find this information in the journal description, or by checking some papers and identifying the dates when each was submitted and when it was finally accepted. Be aware that some journals do not want to "look bad" when substantial time has passed between submission and review, so at some point they treat the paper as "resubmitted" and reset the count.
The h-index is an indicator of academic performance. It is the largest number h such that the author has h publications each cited at least h times. For instance, if an author has 1 publication cited 1 time, then h=1. If an author has 20 publications of which 10 are cited more than 10 times and the rest fewer than 10 times, then h=10, regardless of the total number of papers (Wikipedia has a very good explanation). As it was proposed by Jorge Hirsch, it is sometimes referred to as the Hirsch index. The index can also be computed for research groups, universities or even journals. A rough rule of thumb of proper academic performance in forest sciences is to expect a doctor to increase the h-index by at least one unit per year after graduation. Bear in mind that disciplines such as ecology result in higher numbers of citations than other disciplines such as forest operations. In addition, publications with a large number of coauthors, or review papers, may increase the h-index even though the author's real contribution may have been modest. There are also unethical practices such as citation rings or citation cartels, where the members of a research group consistently cite each other in order to inflate their citation metrics. In my opinion this is an unfortunate consequence of the obsession with academic metrification by universities and government funding offices.
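The definition above can be made concrete with a small sketch. This is my own illustration, not any official implementation: sort the citation counts in decreasing order and find the last position i where the i-th paper still has at least i citations.

```python
def h_index(citations):
    """Largest h such that the author has at least h papers,
    each cited at least h times (a minimal illustrative sketch)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:   # the i-th most-cited paper still has >= i citations
            h = i
        else:
            break
    return h

# The two examples from the text:
print(h_index([1]))                    # one paper cited once -> h = 1
print(h_index([11] * 10 + [5] * 10))   # 10 papers cited 11 times,
                                       # 10 cited 5 times -> h = 10
```

Note that the total number of papers and the citation counts above h do not matter: an author with one paper cited 1000 times still has h=1.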
Who should be a coauthor of a manuscript? Is supervising reason enough? Is providing data? Is obtaining the funding? Clear criteria for deciding who deserves coauthorship have been discussed for a long time. The latest convention is the Vancouver Group Recommendations [link]. The main conditions for coauthorship are that the person should have provided: 1. Substantial contributions to the conception or design of the work, or the acquisition, analysis, or interpretation of data for the work; AND 2. Drafting the work or revising it critically for important intellectual content; AND 3. Final approval of the version to be published; AND 4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. On top of these, the group suggests that "in addition to being accountable for the parts of the work he or she has done, an author should be able to identify which co-authors are responsible for specific other parts of the work. In addition, authors should have confidence in the integrity of the contributions of their co-authors". In my experience, some organisations (including public institutions with public data) often demand that members of the institution be listed as coauthors in exchange for data. Even if this cooperation is somewhat forced, these coauthors must contribute to the paper and must fulfill the Vancouver Group criteria. The demand is a logical one, as the use of data by researchers unfamiliar with the conditions under which it was collected may result in fundamental errors. So it is always good to have a researcher involved in the dataset as one more coauthor, subject to the condition that they truly verify the data and the contents of the paper (which is not always the case).
Concerning the order of authors, there are different traditions. In economics, for instance, the tradition is to list coauthors alphabetically, and papers often have a single author or just two. In forestry, however, this is not the case. Papers are more often the result of cooperation among several authors, and the first author takes the main credit. Therefore, the first author must have played the most relevant role in writing the paper and designing the analysis. As for the rest, it is usually implied that the last author is the most senior one: the designer of the experiment, the one who provided the overall financial basis, or the main supervisor with the most experience on the topic. In evaluations, the last author gets substantial credit for the paper, particularly in some countries (Sweden, Spain...). The remaining coauthors should be listed according to their overall contribution, the second author having made the second-largest contribution, and so forth. When there are many coauthors and the paper is the result of a large group cooperation (more than 5 or 6 coauthors), there are no strict distinctions other than the first author (and the last one).
In many journals in forest sciences, about 75% of submissions end in rejection. Some journals place great weight on the novelty of the research and penalize severely any paper they judge not novel enough. Others just check the rigour and correctness of the paper (e.g. PLOS ONE) and let future readers judge the novelty. The usual steps are: First, the editor or associate editor takes a quick look at the paper and decides whether it fits the journal's topics and whether the novelty and presentation are good enough. The first and fastest rejection may come at this step. Second, the paper is sent for review to independent referees. The editor will make a decision based on their reports, and will most likely follow the more critical one. If one of the referees suggests rejection, the paper will most likely be rejected. A paper that is poorly presented or has fundamental flaws is often rejected directly by the editor; other times, rejection is due to being out of the scope of the journal. A bad paper is, and must be, rejected. The problem is that sometimes good papers are also rejected, because the reviewers simply do not understand the topic, or because of plain bad luck. In that case, you should consider an alternative journal and incorporate all the comments from the previous feedback, so the new version is stronger. You can even attach the comments from those previous reviewers and explain to the new journal how you have addressed them. Keep in mind that a single large flaw is enough to reject a paper, and in that case both reviewers will usually point to exactly the same thing. If you feel the paper was good, and the reviewers criticise several things without agreeing on the flaws, it can be a sign that they simply did not fully understand it, and your effort should go into better communication. And then be reminded of quotes such as:
"Louis Pasteur's theory of germs is ridiculous fiction". -Pierre Pachet, Professor of Physiology at Toulouse, 1872.
Remember to check the plagiarism guide [here]