Differences between revisions 1 and 14 (spanning 13 versions)
Revision 1 as of 2017-02-08 18:42:13
Size: 615
Editor: DavidOwen
Comment: Added 2 papers about wisdom of the crowds
Revision 14 as of 2018-02-12 19:12:44
Size: 2905
Editor: DavidOwen
Comment:
 * [[http://engineering.flipboard.com/2017/02/storyclustering|Clustering Similar Stories Using LDA]]: Good mash-up of ideas, including LDA (Latent Dirichlet Allocation), automatic dimensionality reduction, and clustering.
 * [[https://openai.com/blog/adversarial-example-research/|Attacking machine learning with adversarial examples]]: Particular mention of image-classifying ANNs, which are especially prone to adversarial noise that is imperceptible to humans.
 * [[https://hbr.org/2017/04/good-management-predicts-a-firms-success-better-than-it-rd-or-even-employee-skills|Good Management Predicts a Firm’s Success Better Than IT, R&D, or Even Employee Skills]]: An NBER study that appears to have been done in R.
 * [[https://datawhatnow.com/simhash-question-deduplicatoin/|SimHash for question deduplication]]: Very easy intro to !SimHash. See also [[https://en.wikipedia.org/wiki/SimHash|the Wikipedia entry]].
 * [[https://www.autodeskresearch.com/publications/samestats|Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing]]
 * [[http://www.gasmodel.com/|Generalized Autoregressive Score models]]: a way to fit time-series with a variety of distributions
 * [[https://arxiv.org/abs/1705.03633|Inferring and Executing Programs for Visual Reasoning]]: ML programs generating other ML programs
 * [[https://arxiv.org/abs/1706.03741|Deep reinforcement learning from human preferences]]: Aims to minimize the amount of human feedback needed for the system to train itself correctly.
 * [[https://arxiv.org/abs/1708.00630|ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural Projections]]: Trains a simpler ANN "next to" a more traditional ANN for image recognition, getting good results from the simpler ANN with reduced memory requirements.
 * [[https://www.theatlantic.com/business/archive/2012/05/when-correlation-is-not-causation-but-something-much-more-screwy/256918/|When Correlation Is Not Causation, But Something Much More Screwy]]
 * [[https://dmitryulyanov.github.io/deep_image_prior|Deep Image Prior]]
 * [[http://www.argmin.net/2018/01/25/optics/|Lessons from Optics, The Other Deep Learning]]: Phenomena noticed in training deep ANNs, with an analogy to optics.
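The !SimHash link above covers near-duplicate detection. As a minimal sketch of the idea (word tokens hashed with MD5 and 64-bit fingerprints are illustrative choices, not what the linked article necessarily uses):

```python
import hashlib

def simhash(text, bits=64):
    """Compute a SimHash fingerprint of `text` over whitespace-split word features."""
    # Sum per-bit votes across tokens: +1 if the token's hash has the bit set, else -1.
    votes = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # The fingerprint keeps the majority bit in each position.
    fp = 0
    for i in range(bits):
        if votes[i] > 0:
            fp |= 1 << i
    return fp

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

Near-duplicate texts tend to produce fingerprints with a small Hamming distance, so deduplication reduces to a nearest-neighbor search in Hamming space rather than pairwise text comparison.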

Papers for discussion

Papers (last edited 2019-08-04 01:39:13 by DavidOwen)