Differences between revisions 3 and 19 (spanning 16 versions)
Revision 3 as of 2017-02-18 03:11:10
Size: 1074
Editor: DavidOwen
Comment: Noise attacks against ANNs
Revision 19 as of 2019-08-04 01:39:13
Size: 3774
Editor: DavidOwen
Comment:
 * [[https://hbr.org/2017/04/good-management-predicts-a-firms-success-better-than-it-rd-or-even-employee-skills|Good Management Predicts a Firm’s Success Better Than IT, R&D, or Even Employee Skills]]: An NBER study that appears to be done in R.
 * [[https://datawhatnow.com/simhash-question-deduplicatoin/|SimHash for question deduplication]]: Very easy intro to !SimHash. See also [[https://en.wikipedia.org/wiki/SimHash|the Wikipedia entry]].
 * [[https://www.autodeskresearch.com/publications/samestats|Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing]]
 * [[http://www.gasmodel.com/|Generalized Autoregressive Score models]]: a way to fit time-series with a variety of distributions
 * [[https://arxiv.org/abs/1705.03633|Inferring and Executing Programs for Visual Reasoning]]: ML programs generating other ML programs
 * [[https://arxiv.org/abs/1706.03741|Deep reinforcement learning from human preferences]]: Aims to minimize the amount of time a human must spend giving feedback for the system to train correctly
 * [[https://arxiv.org/abs/1708.00630|ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural Projections]]: Trains a simpler ANN "next to" a more traditional ANN for image recognition, getting good results from the simpler ANN with reduced memory requirements.
 * [[https://www.theatlantic.com/business/archive/2012/05/when-correlation-is-not-causation-but-something-much-more-screwy/256918/|When Correlation Is Not Causation, But Something Much More Screwy]]
 * [[https://dmitryulyanov.github.io/deep_image_prior|Deep Image Prior]]
 * [[http://www.argmin.net/2018/01/25/optics/|Lessons from Optics, The Other Deep Learning]]: Phenomena noticed in training deep ANNs, with an analogy to optics.
 * [[http://ai2.ethz.ch/|AI2 (Abstract Interpretation for AI Safety)]]: Using abstract interpretation to guard against adversarial attacks.
 * [[https://arxiv.org/abs/1806.04743|INFERNO: Inference-Aware Neural Optimisation]]
 * [[https://thomas-tanay.github.io/post--L2-regularization/|A New Angle on L2 Regularization]]
 * [[https://arxiv.org/abs/1908.00200|KiloGrams: Very Large N-Grams for Malware Classification]]: Clever use of multi-passing over a large dataset with an approximating first pass, to reduce computational and memory requirements.
 * [[http://www.sciencesuccess.org/uploads/1/5/5/4/15543620/science_quantifying_aaf5239_sinatra.pdf|Quantifying the evolution of individual scientific impact]]: Digs into the distributions found in the data to produce a really excellent model; somewhat contrary to what we'd currently expect from machine-learning approaches.
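
The SimHash entry above is easy to illustrate. A minimal sketch (my own illustration, not code from the linked post; the per-token hash and bit width are arbitrary choices): each token votes ±1 on every fingerprint bit, and similar token sets end up with fingerprints at small Hamming distance.

```python
import hashlib

def simhash(tokens, bits=64):
    """Compute a SimHash fingerprint: similar token sets yield
    fingerprints that differ in few bits."""
    votes = [0] * bits
    for tok in tokens:
        # Stable per-token hash, truncated to `bits` bits (md5 is an
        # arbitrary choice here; any well-mixed hash works).
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    # Each output bit is the sign of the summed votes for that position.
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

For question deduplication, two near-duplicate questions share most tokens, so their fingerprints differ in only a handful of bits, while unrelated questions land roughly 32 bits apart (for 64-bit fingerprints).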
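
The multi-pass idea in the KiloGrams entry can be sketched as follows (a simplified illustration of the general approach, not the paper's implementation; the bucket count and threshold are made-up parameters). Pass 1 hashes every n-gram into a small counter array, so counts are approximate and collisions can only over-count; pass 2 keeps exact counts only for n-grams whose bucket passed the threshold, bounding memory by the number of survivors rather than the number of distinct n-grams.

```python
from collections import Counter

def frequent_ngrams(docs, n=4, num_buckets=1 << 16, threshold=2):
    """Two-pass approximate mining of frequent character n-grams."""
    # Pass 1: approximate counts in a fixed-size array of buckets.
    buckets = [0] * num_buckets
    for doc in docs:
        for i in range(len(doc) - n + 1):
            buckets[hash(doc[i:i + n]) % num_buckets] += 1
    # Pass 2: exact counts, but only for n-grams whose bucket survived.
    exact = Counter()
    for doc in docs:
        for i in range(len(doc) - n + 1):
            gram = doc[i:i + n]
            if buckets[hash(gram) % num_buckets] >= threshold:
                exact[gram] += 1
    # Drop survivors whose exact count fell short: they only passed
    # pass 1 because of hash collisions.
    return {g: c for g, c in exact.items() if c >= threshold}
```

Because collisions only inflate bucket counts, pass 1 never discards a truly frequent n-gram; the exact second pass then removes any false positives.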

Papers for discussion
