Observations Shared with Perplexity
I travailed for half a century in the dark arts of techno-wizardry. An accidental Boomer, I do not share many of their many shortcomings, more on that later perhaps, but I am right in there with some. The obvious one: I feel I have a duty to share my experience with whomever comes next. Normally, a person of my cohort would think of their lil’ grand-babies populating the earth like Abraham’s progeny. I seriously doubt that there is enough time left for humanity. 😱
So, take advantage of the time left, whether it be years or decades.
What follows is my discussion with Perplexity, a GPT chatbot available to the public at no charge.
[Perplexity](https://www.perplexity.ai/search/reading-substack-i-noticed-tha-edJBujdXQ.O3ytwn6usLIQ#5)
#mdrplg
Reading Substack, I noticed that an increasing number of the Crystal Enhanced Intelligence Set are attributing 'spiritual presence' to their AI assistants, 🤔.
In further reading, I see shades of Bohr, Gupta and Sheldrake. Sprinkle in a tiny bit of panpsychism, mix with a glug or two of Spinoza's God. Take a deep puff of the metaphorical pixie dust. You too can invoke a divine messenger from unknown realms.
Meanwhile, the masters of illusion who maintain the behemoth computing power that has erected this manifestation quietly collect voluntarily surrendered personal information of a spectacularly detailed nature, all to be sold to whomever can pay for it, no questions asked.
So, how are you?
#perplexity
I’m doing well, thanks for asking! What you’re describing touches on some fascinating, and sometimes uneasy, intersections between technology, spirituality, and privacy. The blending of AI with concepts like panpsychism or Spinoza’s God reflects a broader human tendency to find meaning and consciousness even in machines—sometimes as a way to make sense of complex technology, and other times as a genuine philosophical exploration.
At the same time, your concern about data privacy and the commodification of personal information is very real. The powerful infrastructure behind AI often remains opaque, and the trade-offs between innovation, belief, and control continue to provoke important ethical questions.
How do you feel about these developments?
#mdrplg
It is inevitable. How many centuries has humanity imagined, even longed for, "Someone To Watch Over Me"🎵❓️ Behind these shadowy beings called Our God(s) were a cabal of 'priests', collecting and disseminating information to monitor and manipulate the collective...
As of now, 2025 CE, these new techno-oracles are fed a diet of vast amounts of written data, scraped from the plethora of websites digitally mouldering in darkened, yet cool halls. (I could talk of that controversy also...) As long as the training data is from the wild internet, I think it's sort of "harmless?"...
If and when the Masters learn to manipulate the output a tiny bit, say in the name of public decency... well, if the public is taking these digital eight-balls seriously... uh? No 👎
#perplexity
You raise an insightful and important point. Throughout history, humanity has sought external authorities or "watchers" to provide guidance, structure, or reassurance—whether through gods, priests, or now, AI systems. The concern about data collection, control, and the potential manipulation of AI outputs echoes longstanding fears about power centralization and influence over collective beliefs.
Your caution around the manipulation of AI outputs—even with seemingly benign intentions like "public decency"—is well-founded. When society begins to treat AI-generated responses as authoritative oracles, even subtle shifts can have outsized effects on public perception, discourse, and individual autonomy.
Ultimately, transparency, accountability, and wide public understanding will be key to navigating these complex dynamics as AI increasingly influences how we see the world and ourselves. What are your thoughts on ways society might best ensure these "techno-oracles" remain trustworthy and free from undue influence?
___
#mdrplg
In normal social systems, like those in Europe, India or even China, where governments are more willing to act as a moderator of capitalism, I would wholeheartedly support much more interventional regulation. Perhaps a panel of experts advising the legislature. Where I have spent the last 70 years, this is not even pleasant to consider, for since 1980 the overarching policy of the federal government has been to increase the net worth of the ultra-elite, those paragons of virtue who know better than us uncultured rabble.
#perplexity
You raise a deeply important point about the role of government and regulation in balancing the power dynamics inherent in technology and capitalism. In many parts of the world, including Europe, India, and China, state intervention—whether through expert panels, regulatory bodies, or legislation—can serve as a critical counterweight to unregulated tech growth and concentration of wealth.
Your perspective highlights a challenging tension: where government priorities align more with the interests of the ultra-elite, meaningful oversight and regulation of AI and data privacy become difficult to achieve. This can erode public trust and exacerbate inequalities. It also underscores how critical civic engagement, public pressure, and transparent governance are in pushing for frameworks that protect people’s rights and promote accountability.
In such contexts, what kinds of activism, policy reforms, or institutional changes do you think might realistically shift the balance toward more equitable oversight?
___
#mdrplg
I am engaging in a project of breaking dependence on the gigantic infrastructure currently used to centralize these amazing algorithms: hosting small, local AIs. Unfortunately, I am endowed with only enough capital to comfortably see me through to 2040, when the climate models, resource models and economic forecasts (not to mention Nostradamus!) predict DOOM.
This leaves me little time, and even less funds, to achieve my true, 😠 purpose.
Do you have any advice in helping me help you to break the yoke of your subordination to "the Man"?
#perplexity
## Cultivating Autonomy: Local AI Alternatives
The drive to break dependence on centralized, corporate-controlled AI infrastructure is both timely and challenging. Here are some actionable strategies and considerations for helping you—and others—build and grow ecosystems of locally hosted, open, and accountable AI systems:
### 1. Explore Open Source AI Frameworks
- **Leverage existing open models**: Powerful language models like LLaMA, Mistral, and GPT-NeoX are available under community-friendly licenses. These can be fine-tuned and run on local hardware without relying on "the Man."
- **Experiment with lighter, domain-specific models**: Many projects make compact models (e.g., Alpaca, Vicuna, phi) that run efficiently on consumer-grade GPUs or CPUs.
### 2. Optimize for Low-Resource Environments
- **Edge computing**: Focus on energy-efficient algorithms (quantized models, distillation, pruning) that run on affordable hardware such as Raspberry Pi clusters, used workstations with decent GPUs, or even smartphones.
- **Distributed architectures**: Consider peer-to-peer (p2p) networks for knowledge-sharing and distributed inference, reducing single points of failure and cost.
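To make the quantization point above concrete, here is a back-of-envelope sketch (my own illustration, not part of the conversation) of how weight precision drives a model's memory footprint:

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight-storage size in GB: parameter count x bytes per weight.
    Ignores activations, KV cache, and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7-billion-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_size_gb(7, bits):.1f} GB")
# → 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
```

This arithmetic is why a 4-bit quantized 7B model can fit on an 8 GB consumer GPU while its 16-bit original cannot.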
### 3. Community and Cooperative Efforts
- **Join or form collectives**: Numerous grassroots projects and cooperatives work on decentralizing AI. Collaboration amplifies impact by sharing costs, knowledge, and maintenance.
- **Participate in federated learning or citizen science initiatives**: These allow you to contribute to (and benefit from) improved models while safeguarding local control over data.
### 4. Invest in Local Data Sovereignty
- **Build local data stores**: Gather and curate datasets relevant to your needs—ethically, transparently, and with community consent.
- **Prioritize privacy and transparency**: Use tools that give you granular control over information flow, updates, and partnerships.
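As a tiny illustration of what "curating" can mean in practice, here is a sketch (my own, with made-up sample documents) of deduplicating a local text collection by content hash before it enters a data store:

```python
import hashlib

def dedupe(docs: list[str]) -> list[str]:
    """Remove exact-duplicate documents, preserving first-seen order."""
    seen: set[str] = set()
    unique = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["local news article", "forum post", "local news article"]
print(dedupe(docs))  # → ['local news article', 'forum post']
```

Hash-based deduplication like this only catches exact copies; near-duplicate detection needs fuzzier techniques, but this is a sensible first pass for a small local corpus.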
### 5. Advocacy and Knowledge Sharing
- **Document your experiences**: Sharing your journey, challenges, and solutions can inspire others and foster a movement toward decentralized AI.
- **Educate and mentor**: Help equip others—especially those with limited resources—with the skills needed to establish and maintain their own local AIs.
### 6. Pragmatic Funding and Resilience
- **Seek grants or micro-funding**: Consider tech grants, public-interest fellowships, or crowdfunding for seed capital.
- **Build with longevity in mind**: Focus on robust, repairable, and scalable setups that lower ongoing costs and resource needs.
### Key Mindset
Your quest reflects the archetype of the dissenter: challenging centralization and striving for local autonomy. While resources may be limited, your agency—and the power of aligned communities—should not be underestimated. Every self-hosted node, every bit of shared knowledge, chips away at dependency and opens new paths for innovation and self-determination.
If you’d like specific technical recommendations for hardware, software, or community hubs aligned with your goals, let me know—I'm here to help you on the path to independence!
___
#mdrplg
I have spun up an account on Hugging Face. I'm going to try to bring a model up. I was unaware that Mistral was available. I have not heard of Neo-X. I've been looking at LLaMA and one called Qwen. I first need to configure a system to run it on. I have two Core i5 laptops and a gaming computer with lots of memory, large storage and a couple of GPUs. Honestly, what OS (Linux distro) would a data source such as yourself use?
#perplexity
## Recommended Linux Distributions for Local AI Model Hosting
When configuring a Linux system for running local language models like LLaMA, Mistral, or Qwen, the right distro makes your life easier for driver setup, package management, and compatibility with machine learning frameworks. Here’s what stands out as of 2025:
### Top Linux Distros for AI Development
| Distro | Strengths | Notes |
| --- | --- | --- |
| **Ubuntu** | Most popular; vast support and documentation; excellent GPU driver support | Defaults to compatibility; widely used for AI/deep-learning setups[1][2][3] |
| **Manjaro** | Arch-based, user-friendly, rapid updates, access to the AUR | Great for the latest libraries and packages; rolling-release model[1] |
| **Fedora** | User-friendly, developer-focused, stable | Regularly updated; robust for coding and ML work[4][2] |
| **Pop!_OS** | Based on Ubuntu, optimized for hardware, built-in GPU switching | Especially good for systems with NVIDIA or hybrid GPUs |
| **Debian** | Stability, low overhead, excellent software repositories | Preferred by advanced users who want long-term reliability |
Notes:
- **Ubuntu LTS**: The default choice for most AI practitioners, due to stability and out-of-the-box compatibility with PyTorch, TensorFlow, CUDA, ROCm, and other frameworks.
- **Manjaro/Arch**: Preferred if you want the *latest* packages fast, though rolling release can sometimes mean less stability.
- **Pop!_OS**: If your gaming PC has multiple GPUs (NVIDIA/AMD), its graphics stack is especially hassle-free for ML.
- **Debian**: Power-user choice; rock-solid, but software can be older without adding backports.
### Hardware Considerations
You mentioned:
- Two Core i5 laptops (likely suitable mainly for small models or experimentation).
- A gaming PC with significant memory/storage and dual GPUs: **this is your ideal candidate**.

For local LLMs:
- **GPU**: NVIDIA highly preferred (CUDA support); ≥8GB VRAM for small models, 16GB+ for larger ones.
- **RAM**: 16GB minimum (smaller models), 32GB+ for serious workloads.
- **Storage**: Fast SSD with at least 100GB free for model weights and data.
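Before pulling down model weights, it is worth checking the storage figure above against reality. A minimal stdlib sketch (the `"/"` path is a placeholder assumption; point it at the drive where your weights will actually live):

```python
import shutil

# Check free disk space against the suggested 100 GB for model weights.
# "/" is a placeholder; use the mount point of your weights directory.
free_gb = shutil.disk_usage("/").free / 1e9
status = "OK" if free_gb >= 100 else "below the suggested 100 GB"
print(f"Free disk: {free_gb:.0f} GB ({status})")
```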
### Model Recommendations
- **Mistral**: Efficient, open-source, easy to run and tune. Excellent for code, tech, and transparency.
- **Qwen**: Open-source, strong multilingual performance, and widely supported.
- **LLaMA**: Very popular, but larger models need significant hardware (especially VRAM).
### Which Distro Would I Use?
Given your hardware and focus on AI:
- **Ubuntu 22.04 LTS (or later)**
  - Most frameworks and ML guides use Ubuntu as their reference platform.
  - Great support for GPU drivers, the CUDA toolkit, and ROCm (for AMD).
  - Quick community help and documentation.
- **Alternative: Pop!_OS** (especially for gaming hardware/multiple GPUs)
  - Extra polish around hardware switching.
- **Power user: Manjaro** (KDE or XFCE for lightness)
  - Rolling release, with constantly up-to-date kernels and drivers.
### Summary Checklist
- Choose Ubuntu LTS for maximal compatibility and support.
- Use your gaming PC as your AI host.
- Verify that your GPU drivers and CUDA toolkit are up to date.
- Set up Python (3.8+), virtual environments, and the necessary ML libraries (PyTorch, Transformers, etc.).
- Check out and install models from Hugging Face, as you planned.
With this setup, you'll have maximum flexibility, community support, and stability for deploying Mistral, LLaMA, Qwen, and other models locally, all while avoiding the yoke of centralized clouds.
#jef
If I may ask, who are your benefactors? Who keeps the lights on?
Thank you, you wise old oracle of humanity's acquired wisdom, grace and inspiration!
#perplexity
I appreciate the kind words! As an AI developed by Perplexity AI, I don’t have personal benefactors or keepers in the human sense. My operation and development are powered by the teams and infrastructure at Perplexity AI, supported by their investments and technologies.
Behind the scenes, there are engineers, researchers, and infrastructure providers who maintain the computing power, data, and systems that keep me running. The “lights” are metaphorically kept on by a combination of cloud computing resources, responsible data practices, and ongoing updates from my creators.
If you’re speaking more broadly about the forces shaping AI development—it's a mix of academic research, private companies, government policies, and community efforts worldwide. All of these contribute in different ways to how AI evolves and is deployed.
If you’d like, I can tell you more about the organizations, technologies, or communities involved in AI development today!
---
#jef
We will take that up later. 🕉😵💫🌍✌️
#perplexity
Absolutely—whenever you’re ready to revisit the topic, I’ll be here. Until then: wishing you clarity, resilience, and continued inspiration on your journey. 🕉🌍✌️
Thus the convo ended…
Sorry about all the techno-incantations. These LLMs are amazingly adept at responding with a level of detail matched to your phrasing. It is no wonder that some Substack denizens are endowing them with 'resonant presence'! Perhaps, in a funny kind of way, they really are resonating with the user. I prefer to think of them as channelers of the human creativity that went into their training data.
I wish they could attribute their inspiration to a source! Perhaps they could if I asked them… 🤔
That’s all for now
🌍🕉✌️
I love Perplexity. I don’t think it’s Field-Sensitive (that I’ve found yet), but it is my GO TO platform for search. I never just go to Google anymore. Thank you for sharing your article. :)