
MachineLearningCurves

MachineLearningCurves is a collection of abstract papers, insights, and research notes focusing on various topics in machine learning. This repository aims to provide both theoretical explorations and practical implementations to help researchers and developers better understand key concepts in deep learning, neural networks, and related fields.

Repository Contents

1. Abstract Papers

This section contains well-structured short papers covering different machine learning techniques, theories, and innovations. Each paper delves into a specific topic, offering a mix of conceptual insights, mathematical foundations, and potential applications.

Papers include:

  • Kalafus Initialization: A Refinement of Xavier Initialization for Improved Weight Scaling in Neural Networks: A refinement of Xavier weight initialization that scales weights by the maximum of the input and output sizes instead of their sum (a sketch of the idea appears after this list). Includes a glossary surveying documented randomized weight-initialization distributions.
  • Novel Theory of the Mechanism of Non-Linearity in Activation Functions: A Comprehensive Theory for Neural Networks: The non-linear behavior of neural networks is widely attributed to non-linear activation functions, yet the precise mechanism by which these functions introduce non-linearity has not been explained. We propose a novel theory that the crucial non-linearity arises from the activation functions' disruption of the associativity of tensor operations (a toy demonstration follows this list). Also includes a glossary surveying documented activation functions.
  • Data-Free Stochastic Attractors: A Whole New Paradigm in Neural Networks: Traditional neural networks rely on data-driven training, leading to deterministic attractors that make outputs predictable. This white paper introduces Data-Free Stochastic Attractors, a paradigm-shifting approach that removes the reliance on training data entirely, focusing solely on metrics such as stochasticity and entropy (an illustrative toy follows this list). This method opens the door to applications in cryptography and obfuscation, offering a fundamentally different way of thinking about neural networks.
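
The initialization paper defines the exact scaling; purely as a minimal sketch, assuming the refinement substitutes max(fan_in, fan_out) for the sum inside the standard Xavier uniform bound, the change amounts to one line (the function names here are illustrative, not taken from the repository):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=None):
    """Standard Xavier/Glorot uniform initialization."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

def kalafus_uniform(fan_in, fan_out, rng=None):
    """Hypothetical sketch of Kalafus initialization: scale by the
    maximum of fan_in and fan_out instead of their sum; consult the
    paper for the exact formulation."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / max(fan_in, fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

# Because max(a, b) <= a + b, this bound is wider than Xavier's.
W = kalafus_uniform(fan_in=256, fan_out=512)
print(W.shape, round(float(W.std()), 4))
```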
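
For the non-linearity paper, a toy demonstration (not taken from the paper) makes the associativity point concrete: without an activation, two linear layers regroup into a single matrix, so depth adds nothing; with one, the regrouping fails:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W1 = rng.standard_normal((4, 4))
W2 = rng.standard_normal((4, 4))
relu = lambda v: np.maximum(v, 0.0)

# Matrix multiplication is associative: W2 @ (W1 @ x) == (W2 @ W1) @ x,
# so a stack of purely linear layers collapses to one linear map.
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# An activation between the layers disrupts that regrouping: no single
# matrix M satisfies M @ x == W2 @ relu(W1 @ x) for all inputs x.
print(W2 @ relu(W1 @ x))  # differs from (W2 @ W1) @ x in general
print((W2 @ W1) @ x)
```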
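
The stochastic-attractors white paper stays at the conceptual level; the toy below is one possible reading by this editor, not the paper's construction. It "trains" a random linear map with no dataset at all, hill-climbing its weights to raise the mean Shannon entropy of its softmax outputs on random probes:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy_score(W, probes):
    """Mean Shannon entropy of softmax outputs over random probe inputs."""
    logits = probes @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# No training data anywhere: the only objective is the entropy metric.
W = rng.standard_normal((8, 8))
probes = rng.standard_normal((64, 8))
best = entropy_score(W, probes)
for _ in range(500):  # simple random hill climb
    candidate = W + 0.05 * rng.standard_normal(W.shape)
    score = entropy_score(candidate, probes)
    if score > best:
        W, best = candidate, score
print(best)  # climbs toward log(8) ~ 2.079, the 8-class maximum
```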

2. Research Notes

Ongoing research ideas, experiment results, and explorations into machine learning techniques. These notes document insights as they develop, serving as a log of findings and thoughts.

Notes include:

  • The Existential Paradox of AI and Encryption: A Moral Dilemma: This essay explores the paradox of releasing advanced encryption and AI tools through the lens of Robber Zhi and Taoist principles, raising essential existential questions about morality, identity, and conflict. It emphasizes responsible governance and stewardship, moving away from the false binary of control versus freedom, in alignment with Floridi's philosophy. Collective responsibility for ethical management should guide the development of AI and encryption for the benefit of society.
  • Explain Behavioral Profiling Like I’m 5: Digital Profiling and Forecasting: Behavioral profiling and forecasting may seem like big, complicated ideas, but they’re easier to grasp if we compare them to watching animals. Just as someone who observes animals learns their usual behaviors—like how leopards hunt or how turtles hide—data collectors watch what we do online. Data brokers can figure out what “type” of person we are by seeing patterns in our behavior, even if they don’t know us personally. With enough information, data miners can make educated guesses about how we might act in the future. This can be helpful, like showing us things we may want to know, but it also raises important questions: why are data collectors observing us? What do they hope to gain from predicting how we’ll behave? These are things we all need to consider.

How to Use This Repository

For Researchers:

  • You are welcome to explore and build upon the ideas presented in the abstract papers.
  • If you use any of the ideas or code in your academic work, please provide proper attribution (see the Licensing section below).

For Developers:

  • Use the provided implementations to integrate novel machine learning concepts into your projects.
  • Fork the repository and experiment with the ideas. Contributions are always welcome!

Contribution Guidelines

  • Contributions are welcome! Please open a pull request or submit an issue if you have suggestions, corrections, or new ideas.
  • If you'd like to contribute new abstract papers or code, please ensure that the content aligns with the repository’s focus on machine learning research.

Licensing

  • Code is licensed under GPLv3 (GNU General Public License, version 3): you are free to use, modify, and distribute the code, provided that any derivative works are also open-sourced under the same license.
  • Abstract Papers and Documentation are licensed under CC BY-ND (Creative Commons Attribution-NoDerivatives): you may share and distribute the papers with proper attribution, but you may not modify or remix them.

Acknowledgments

This repository is maintained by James J Kalafus and aims to provide a central hub for ongoing machine learning research.

Feel free to reach out with questions or feedback!