Some ideas

Concept of AI itself

I’ve glanced at many papers (knowing, of course, that I understand very little of their jargon) and concluded that the recent statistical and mathematical analysis of AI has simply been overthought. Yet the AI theory of the 70s and 80s delves into entirely conflicting perspectives on the driving force of AI, tied up with morality and consciousness in the human brain.

Think about the other organs of the body. They are certainly not simple, but after 150 years we’ve almost figured out how they work mechanically and chemically. The challenge is how they work mathematically, and I believe that an attempt to determine an accurate mathematical representation of the human body would essentially lead to retracing its entire evolutionary history, down to the tiny imperfections of every person across each generation. Just as none of our hands are shaped the same, our brains are most likely structured uniquely, save for their general physical layout.

I conjecture that the brain must be built on some fundamental concept that researchers simply have not discovered yet. It would be a beautiful conclusion, like the mass-energy equivalence that crossed Einstein’s mind while he was working in the patent office. It would be so fundamental that it would make AI ubiquitous and viable on all types of computers and architectures. And if that is not the case, then we will adapt our system architectures to the brain model to create compact, high-performing AI. Supercomputers would only have to be pulled out to simulate global-scale phenomena and to handle creative work, such as software development, penetration testing, video production, and presidential-class political analysis and counsel.

Graph-based file system

Traditional file systems suffer from a tiny problem: their structure is inherently a top-down hierarchy, and data may only be organized along one set of categories. With the increasing complexity of operating systems, organizing operating system files, kernel drivers, kernel libraries, user-mode shared libraries, user-mode applications, application resources, application configurations, application user data, caches, and per-user documents is becoming more and more troublesome. The POSIX layout is, at present, “convenient enough” for current needs, but I resent being forced to follow a standard method of organization when it introduces redundancy and the misapplication of symbolic links.

In fact, the use of symbolic links exacerbates the fundamental problem with these file systems: they operate at too low a level, and in attempting to reorganize and deduplicate data they simply increase the complexity of the file system tree.

Instead, every node should consist of metadata together with either data or a container linking to other nodes. Metadata may contain links to other metadata, or even to nodes consisting solely of metadata encapsulated as regular data. A data-only node is, of course, a file, while a containerized node is a directory. The difference is that in a graph-based file system, each node is uniquely identified by a number rather than a string name (a string name in the metadata would still be used for human-readable listings, and a special identifier can serve as a link or locator for this node from other programs).
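A rough sketch of such a node in Rust, just to make the idea concrete; every name here, from Node to the metadata keys, is an assumption for illustration, not an existing API:

use std::collections::HashMap;

/// Nodes are located by a number, never by a path string.
type NodeId = u64;

/// Open-ended metadata; a "name" entry exists only for human-readable
/// listings and is not what locates the node.
type Metadata = HashMap<String, String>;

/// A node is metadata plus either raw data (a file) or a container of
/// links to other nodes (the directory-like case).
struct Node {
    meta: Metadata,
    content: Content,
}

enum Content {
    Data(Vec<u8>),
    Container(Vec<NodeId>),
}

fn main() {
    let mut fs: HashMap<NodeId, Node> = HashMap::new();

    // A data-only node: a file, reachable purely by its number.
    fs.insert(2, Node {
        meta: HashMap::from([("name".to_string(), "stdio.h".to_string())]),
        content: Content::Data(b"/* header contents */".to_vec()),
    });

    // A containerized node linking to the file above.
    fs.insert(1, Node {
        meta: HashMap::from([("name".to_string(), "compiler headers".to_string())]),
        content: Content::Container(vec![2]),
    });

    // Lookup happens by identifier, not by walking a path string.
    if let Some(node) = fs.get(&1) {
        if let Content::Container(links) = &node.content {
            println!("{} links to nodes {:?}",
                     node.meta.get("name").unwrap(), links);
        }
    }
}

The only point of the sketch is that lookup takes an identifier, while the human-readable name is just another piece of metadata.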

The interesting part about this concept is that it completely removes the need for file paths. A definite, specific structure is no longer required to run programs. Imagine compiling a program without the hell of locating compiler libraries and headers, because they are already linked to the node where the compiler was installed.

The file system size could be virtually limitless, as one could define specifics such as bit widths and byte order upon the creation of the file system.
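As a sketch of what “defined upon creation” could mean, a few format parameters might be fixed when the file system is made; the struct and field names below are assumptions, not a real on-disk format:

/// Hypothetical parameters fixed when the file system is created; the
/// struct and field names are illustrative assumptions only.
struct FormatParams {
    id_width_bits: u8,      // e.g. 64- or 128-bit node identifiers
    offset_width_bits: u8,  // width of on-disk offsets, bounding the total size
    little_endian: bool,    // byte order chosen at creation time
}

fn main() {
    let params = FormatParams {
        id_width_bits: 128,
        offset_width_bits: 64,
        little_endian: true,
    };
    // With 64-bit offsets the addressable size is 2^64 bytes; a wider
    // field pushes the limit further without changing the node model.
    println!(
        "ids: {} bits, offsets: {} bits, little-endian: {}",
        params.id_width_bits, params.offset_width_bits, params.little_endian
    );
}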

Even the kernel would build itself around this file system, from boot. Upon mount, the root node is retrieved, linking to core system files and the rest of the operating system; package management to dodge conflicts between software wouldn’t be necessary, since everything is uniquely identified and can be flexibly organized to state exactly which applications require which version of a library.
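A toy illustration of that last point, with made-up node numbers and names: two applications each link to the exact library node they need, and both library versions simply coexist as separate nodes instead of fighting over a single path.

use std::collections::HashMap;

type NodeId = u64;

fn main() {
    // Human-readable labels for the example nodes (illustrative only).
    let mut names: HashMap<NodeId, &str> = HashMap::new();
    names.insert(100, "app-a");
    names.insert(101, "app-b");
    names.insert(200, "libexample 1.1");
    names.insert(201, "libexample 3.0");

    // Each application node links directly to the library node it requires,
    // so no package manager has to arbitrate a shared install location.
    let mut links: HashMap<NodeId, Vec<NodeId>> = HashMap::new();
    links.insert(100, vec![200]); // app-a pins the old version
    links.insert(101, vec![201]); // app-b pins the new version

    for (app, deps) in &links {
        for dep in deps {
            println!("{} -> {}", names[app], names[dep]);
        }
    }
}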

In essence, it is a file system that abandons the tree structure and location by path, while encouraging references from anywhere to a specific piece of data.

Japanese visual novel using highly advanced AI (HAAI)

This would be an interesting first product for an aspiring AI company to show off its flagship “semi-sentient” HAAI technology. Players would be able to speak and interact with characters, with generated responses including synthesized voices. The game would include a basic virtual machine containing a switchable English/Japanese language core, a common sense core (simulating about ten years’ worth of real-life mistakes and experiences), and an empathy core (with a driver, to be able to output specific degrees of emotion). Developers would then parametrize the VM and add quirks for each character, so that every character ends up with a unique AI VM image.
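Purely for illustration, the per-character parametrization might boil down to a configuration like the following; every type and field here is speculative:

/// A speculative sketch of a per-character "AI VM image" configuration.
enum LanguageCore {
    English,
    Japanese,
}

struct CharacterVm {
    language: LanguageCore,     // switchable English/Japanese core
    experience_years: f32,      // common sense core: ~10 years of simulated mistakes
    empathy_gain: f32,          // empathy core: how strongly emotion is expressed
    quirks: Vec<&'static str>,  // per-character additions on top of the base image
}

fn main() {
    // Each character starts from the same base VM and diverges only in
    // its parameters and quirks, ending up as a unique image.
    let heroine = CharacterVm {
        language: LanguageCore::Japanese,
        experience_years: 10.0,
        empathy_gain: 0.8,
        quirks: vec!["ends sentences with a hum", "hates thunderstorms"],
    };
    println!(
        "quirks: {:?}, empathy {}, {} years of experience",
        heroine.quirks, heroine.empathy_gain, heroine.experience_years
    );
    match heroine.language {
        LanguageCore::English => println!("language core: English"),
        LanguageCore::Japanese => println!("language core: Japanese"),
    }
}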

In fact, the technology showcased would be so successful that players would spend too much time enjoying the authentic, human-like communication and get to know the fictional characters too well, warranting a warning upon launching the game (like any health and safety notice) stating: “This game’s characters use highly advanced artificial intelligence. No matter how human-like these fictional characters may act, they are not human beings. Please take frequent breaks and talk to real, human people periodically, to prevent excessive attachment to the AI.”
