Since we recently announced our $10,001 Binary Battle to promote applications built on the Mendeley API (now including PLoS as well), I decided to take a look at the data to see what people have to work with. My analysis focused on our second largest discipline, Computer Science. Biological Sciences (my discipline) is the largest, but I started with this one so that I could look at the data with fresh eyes, and also because it's got some really cool papers to talk about. Here's what I found:
What I found was a fascinating list of topics, with many of the expected fundamental papers like Shannon's Theory of Information and the Google paper, a strong showing from MapReduce and machine learning, but also some interesting hints that augmented reality may be becoming more of an actual reality soon.
The top graph summarizes the overall results of the analysis. It shows the Top 10 papers among those who have listed computer science as their discipline and chosen a subdiscipline. The bars are colored according to subdiscipline and the number of readers is shown on the x-axis. The bar graphs for each paper show the distribution of readership levels among subdisciplines. 17 of the 21 CS subdisciplines are represented, and the axis scales and color schemes remain constant throughout. Click on any graph to explore it in more detail or to grab the raw data. (NB: Only a minority of computer scientists have listed a subdiscipline. I would encourage everyone to do so.)
1. Latent Dirichlet Allocation (available full-text)
LDA is a means of classifying objects, such as documents, based on their underlying topics. I was surprised to see this paper as number one instead of Shannon's information theory paper (#7) or the paper describing the concept that became Google (#3). It turns out that interest in this paper is very strong among those who list artificial intelligence as their subdiscipline. In fact, AI researchers contributed the majority of readership to 6 out of the top 10 papers. Presumably, those interested in popular topics such as machine learning list themselves under AI, which explains the strength of this subdiscipline, whereas papers like the MapReduce one or the Google paper appeal to a broad range of subdisciplines, giving those papers smaller numbers spread across more subdisciplines. Professor Blei is also a bit of a superstar, so that didn't hurt. (The irony of a manually categorized list with an LDA paper at the top has not escaped us.)
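For the curious, here's a toy sketch of the idea: a tiny collapsed Gibbs sampler for LDA over hand-tokenized "documents". (The paper itself uses variational inference, and real work would reach for a library like gensim; this is just a minimal, assumption-laden illustration of documents being soft-assigned to underlying topics.)

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA over tokenized documents."""
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)
    z = []                                              # topic assigned to each token
    ndk = [[0] * n_topics for _ in docs]                # doc -> topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]   # topic -> word counts
    nk = [0] * n_topics                                 # total tokens per topic
    for d, doc in enumerate(docs):                      # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                             # remove token's current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Resample its topic proportional to the conditional distribution.
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # Per-document topic proportions.
    return [[(ndk[d][t] + alpha) / (len(doc) + n_topics * alpha)
             for t in range(n_topics)] for d, doc in enumerate(docs)]

docs = [["apple", "banana", "fruit"], ["goal", "match", "score"],
        ["fruit", "banana", "apple"], ["score", "goal", "match"]]
theta = lda_gibbs(docs, n_topics=2)  # each row: a document's topic mixture
```

The `theta` matrix is each document's inferred mix of topics; with two cleanly separated vocabularies, the fruit documents and the sports documents should end up dominated by different topics.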
2. MapReduce: Simplified Data Processing on Large Clusters (available full-text)
It's no surprise to see this in the Top 10 either, given the huge appeal of this parallelization technique for breaking down huge computations into easily executable and recombinable chunks. The importance of the monolithic "Big Iron" supercomputer has been on the wane for decades. The interesting thing about this paper is that it had some of the lowest readership scores of the top papers within any one subdiscipline, but folks from across the entire spectrum of computer science are reading it. This is perhaps expected for such a general-purpose technique, but given the above it's strange that there are no AI readers of this paper at all.
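The chunk-then-recombine idea fits in a few lines. Here's the classic word-count example with explicit map, shuffle, and reduce phases; in a real MapReduce cluster the map and reduce calls would run in parallel across many machines, but the logic is the same:

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk of input.
    return [(word, 1) for word in chunk.split()]

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Reduce: combine all values for one key into a single result.
    return key, sum(values)

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = chain.from_iterable(map_phase(c) for c in chunks)
counts = dict(reduce_phase(k, v) for k, v in shuffle_phase(mapped).items())
# counts["the"] == 3, counts["fox"] == 2
```

Each phase only ever sees a chunk or a single key's values, which is exactly what makes the computation easy to distribute and recombine.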
3. The Anatomy of a Large-Scale Hypertextual Web Search Engine (available full-text)
In this paper, Google founders Sergey Brin and Larry Page discuss how Google was created and how it initially worked. This is another paper that has high readership across a broad swath of disciplines, including AI, but wasn't dominated by any one discipline. I would expect that the largest share of readers have it in their library mostly out of curiosity rather than direct relevance to their research. It's a fascinating piece of history related to something that has now become part of our everyday lives.
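The heart of the paper's ranking scheme, PageRank, is small enough to sketch: a page's score is the chance a random surfer lands on it, computed by repeatedly redistributing rank along links. This is a minimal power-iteration version over a made-up three-page web:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict of page -> list of outbound links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}  # random-jump term
        for page, outs in links.items():
            if outs:
                share = rank[page] / len(outs)       # split rank across outlinks
                for out in outs:
                    new[out] += damping * share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
rank = pagerank(links)
# C is linked from both A and B, so it ends up with the highest rank
```

The ranks form a probability distribution (they sum to 1), and pages with more incoming link weight float to the top, which is the paper's core insight.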
4. Distinctive Image Features from Scale-Invariant Keypoints
This paper was new to me, although I'm sure it's not new to many of you. It describes how to identify objects in a video stream without regard to how near or far away they are, or how they're oriented with respect to the camera. AI again drove the popularity of this paper in large part, and to understand why, think "Augmented Reality". AR is the futuristic idea most familiar to the average sci-fi enthusiast as Terminator-vision. Given the strong interest in the topic, AR could be closer than we think, but we'll probably use it to layer Groupon deals over shops we pass by instead of building unstoppable fighting machines.
5. Reinforcement Learning: An Introduction (available full-text)
This is another machine learning paper, and its presence in the top 10 is mainly due to AI, with a small contribution from folks listing neural networks as their discipline, most likely because it was published in IEEE Transactions on Neural Networks. Reinforcement learning is essentially a technique that borrows from biology, where the behavior of an intelligent agent is controlled by the amount of positive stimulus, or reinforcement, it receives in an environment with many different interacting positive and negative stimuli. This is how we'll teach the robots behaviors in a human-like way, before they rise up and destroy us.
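To make that concrete, here's a toy tabular Q-learning agent (one of the algorithms covered in Sutton and Barto's book) learning to walk down a five-state corridor toward a reward. The corridor environment is my own made-up example, not one from the book:

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a 1-D corridor: reward +1 for reaching the right end."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: explore sometimes, otherwise take the best-known action.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # The reinforcement signal nudges the estimated value of (s, a).
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(4)]
# the learned policy should be "go right" in every non-terminal state
```

Nothing tells the agent the corridor's layout; positive reinforcement at the goal propagates backward through the value estimates until "go right" dominates everywhere.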
6. Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions (available full-text)
Popular among AI and information retrieval researchers, this paper discusses recommendation algorithms and classifies them into collaborative, content-based, or hybrid. While I wouldn't call this paper a groundbreaking event of the caliber of the Shannon paper above, I can certainly understand why it makes such a strong showing here. If you're using Mendeley, you're using both collaborative and content-based discovery methods!
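The collaborative flavor is easy to sketch: recommend items liked by users whose ratings look like yours. This toy user-based version (with invented users and papers, purely for illustration) scores unseen items by similarity-weighted ratings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    num = sum(u[i] * v[i] for i in set(u) & set(v))
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def recommend(target, ratings):
    """User-based collaborative filtering: rank items the target hasn't seen."""
    scores = {}
    for other, their in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their)   # how alike are the two users?
        for item, rating in their.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

ratings = {
    "ann": {"lda": 5, "mapreduce": 4},
    "bob": {"lda": 5, "mapreduce": 4, "pagerank": 5},
    "cai": {"semantic_web": 4},
}
recs = recommend("ann", ratings)
# "pagerank" ranks first: bob rates like ann and loved it
```

A content-based system would instead compare item features (titles, abstracts, keywords); hybrids, as the survey describes, combine both signals.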
7. A Mathematical Theory of Communication (available full-text)
Now we're back to more fundamental papers. I would really have expected this to be at least number 3 or 4, but the strong showing by the AI discipline for the machine learning papers in spots 1, 4, and 5 pushed it down. This paper discusses the theory of sending communications down a noisy channel and demonstrates a few key engineering parameters, such as entropy, which is the range of states of a given communication. It's one of the more fundamental papers of computer science, founding the field of information theory and enabling the development of the very tubes through which you received this web page you're reading now. It's also the first place the word "bit", short for binary digit, appears in the published literature.
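Shannon's entropy is one formula you can compute in a line: H = -Σ p·log₂(p), the average number of bits per symbol from a source. A quick illustration:

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly 1 bit per toss; a predictable, biased
# coin carries less information because its outcomes surprise us less.
fair = entropy_bits([0.5, 0.5])      # 1.0 bit
biased = entropy_bits([0.9, 0.1])    # about 0.469 bits
```

That single quantity is what bounds how far any lossless compressor can squeeze a source, and (with channel capacity) how fast you can signal reliably over a noisy line.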
8. The Semantic Web (available full-text)
In The Semantic Web, Tim Berners-Lee (Sir Tim, the inventor of the World Wide Web) describes his vision for the web of the future. Now, 10 years later, it's intriguing to look back through it and see on which points the web has delivered on its promise and how far off we still remain in so many others. This is different from the other papers above in that it's a descriptive piece, not primary research, but it still deserves its place in the list, and readership will only grow as we get ever closer to his vision.
9. Convex Optimization (available full-text)
This is a very popular book on a widely used optimization technique in signal processing. Convex optimization tries to find the provably optimal solution to an optimization problem, as opposed to a nearby maximum or minimum. While this seems like a highly specialized niche area, it's of importance to machine learning and AI researchers, so it was able to pull in a nice readership on Mendeley. Professor Boyd has a very popular set of video classes at Stanford on the subject, which probably gave this a little boost as well. The point here is that print publications aren't the only way of communicating your ideas. Videos of techniques at SciVee or JoVE or recorded lectures (previously) can really help spread awareness of your research.
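The "provably optimal" point is what makes convexity special: for a convex function, any minimum a local method finds is the global one. Here's a bare-bones gradient descent on a convex quadratic of my own choosing (the book covers far more capable methods, like interior-point solvers):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimize a differentiable convex function given its gradient."""
    x = x0
    for _ in range(steps):
        # Step downhill along the negative gradient.
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# f(x, y) = (x - 3)^2 + 2*(y + 1)^2 is convex, so the minimum that
# plain descent converges to is guaranteed to be the global one: (3, -1).
grad = lambda v: [2 * (v[0] - 3), 4 * (v[1] + 1)]
x_star = gradient_descent(grad, [0.0, 0.0])
```

On a non-convex function the same loop could stall in a local dip; convexity is precisely the assumption that rules that out.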
10. Object Recognition from Local Scale-Invariant Features (available full-text)
This is another paper on the same topic as paper #4, and it's by the same author. Looking across subdisciplines as we did here, it's not surprising to see two related papers, of interest to the main driving discipline, appear twice. Adding the readers from this paper to the #4 paper would be enough to put it in the #2 spot, just below the LDA paper.
Conclusions
So what's the moral of the story? Well, there are a few things to note. First of all, it shows that Mendeley readership data is good enough to reveal both papers of long-standing importance and interesting up-and-coming trends. Fun stuff can be done with this! How about a Mendeley leaderboard? You could grab the number of readers for each paper published by members of your group, and have some friendly competition to see who can get the most readers, month-over-month. Comparing yourself against others in terms of readers per paper could put a big smile on your face, or it could be a gentle nudge to get out to more conferences, or perhaps to record a video of your technique for JoVE or Khan Academy or just YouTube.
Another thing to note is that these results don't necessarily mean that AI researchers are the most influential researchers or the most numerous, just the best at being accounted for. To make sure you're counted properly, be sure to list your subdiscipline on your profile, or if you can't find your exact one, pick the closest one, like the machine learning folks did with the AI subdiscipline. We recognize that almost everyone does interdisciplinary work these days. We're working on a more flexible discipline assignment system, but for now, just pick your favorite one.
These stats were derived from the entire readership history, so they do reflect a founder effect to some degree. Restricting the analysis to the past 3 months would probably reveal different trends, and comparing month-to-month changes could reveal rising stars.