This section is under development...

Q: Isn't AI always "just a tool"?

A: No! Almost all of the time, yes, AI is just a tool. Of course, no one seriously thinks something like a calculator or a spreadsheet is an inventor.

However, at least some of the time, the act that qualifies a natural person to be an inventor—for instance, "conception" of an invention in the U.S. or "devising" an invention in the U.K.—is functionally automated by a machine. Further, some of the time there is no natural person who would traditionally qualify as an inventor. In those cases, we argue the machine is not "just a tool"; it is automating invention.

Q: How could an AI own its own patents?

A: We are not advocating for an AI to own its own patents. We are advocating for the AI's owner to own patents on any AI-generated inventions. AI does not have legal personality and cannot own property.

Q: How were the inventions in the patent applications generated by DABUS?

A: DABUS stands for “Device for the Autonomous Bootstrapping of Unified Sentience.” The innovations described in the patent applications are the products of an extensive artificial neural system that combines the memories of various learned elements into potential inventions that are then evaluated through the equivalent of affective responses. Such responses then either: (1) trigger synaptic noise that serves to generate new juxtapositional concepts, or (2) nullify synaptic noise to reinforce those notions fulfilling some purpose or goal.

In the case of the invention for the "neural flame" device, the inventive aspects of a light-emitting element, flashing at the prescribed frequency and fractal dimension, possibly integrated with a traditional candle base or altar piece, provide the practical benefits of an emergency signal beacon or human attention-getter.

The inventions were conceived by a generative machine intelligence, judging the merit of its own self-conceived ideas based upon its own cumulative experience. The system autonomously chose to selectively reinforce the combination of numerous elements into more complex notions. As discussed further below, the inventions were conceived as various semantic spaces represented in multiple neural network-based associative memories synaptically bonded to one another, along with a neural network-generated image of the notions.

In response, other neural modules chained their memories to predict the favorable consequences of the fleeting ideas, which were then reinforced into a more permanent and significant memory during Eureka moments.

DABUS employs an array of hundreds of neural modules, each carrying out sequential associations of words related to a given topic. Given the input of a properly coded word to the module, an associated word is activated. Run recurrently, this neural net exhaustively searches through the linguistic space for all learned concepts related to the input word, duplicating the process of circular definition, whereby humans claim to define anything, when in objective reality they do not. Instead, they generate associative loops that are meaningful only when they incorporate strongly habituated concepts.
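To make the idea of recurrently searching a linguistic space concrete, here is a minimal sketch, with an illustrative hand-built association table standing in for a trained neural module: starting from a seed word, the search repeatedly follows learned associations until every reachable concept has been visited, and associative loops simply terminate rather than recurse forever.

```python
# Illustrative stand-in for a trained neural module's word associations.
ASSOCIATIONS = {
    "flame": ["light", "candle"],
    "light": ["beacon", "flash"],
    "candle": ["altar", "wax"],
    "beacon": ["signal"],
    "flash": ["frequency"],
}

def linguistic_subspace(seed: str) -> set[str]:
    """Exhaustively collect every concept reachable from the seed word."""
    visited = set()
    frontier = [seed]
    while frontier:
        word = frontier.pop()
        if word in visited:
            continue  # circular (looping) associations terminate here
        visited.add(word)
        frontier.extend(ASSOCIATIONS.get(word, []))
    return visited

print(sorted(linguistic_subspace("flame")))
# → ['altar', 'beacon', 'candle', 'flame', 'flash', 'frequency', 'light', 'signal', 'wax']
```

In an actual neural module the associations are distributed weights rather than a lookup table, but the exhaustive, loop-tolerant traversal of a subspace is the same.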

The primary advantage of using neural modules rather than lookup tables is that application of an appropriate word activates an entire linguistic subspace, as many potential variations on a given theme sequentially activate, thus generating a Byzantine chain of linguistic/semantic associations, a “gestalt,” whose meaning amounts to more than just a given word. Additional neural modules contain imagery associated with these various linguistic subspaces. To establish their relationship with corresponding linguistic modules, relevant visual images are simultaneously presented to the assembly, thus cumulatively binding language and imagery through Hebbian learning.
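The Hebbian binding step can be sketched in a few lines. In this toy example (pattern sizes, the learning rate, and the random patterns are all illustrative assumptions, not DABUS's actual parameters), simultaneously presenting a word pattern and an image pattern strengthens the cross-connections in proportion to their coactivation, so a later word cue reactivates the bound image:

```python
import numpy as np

rng = np.random.default_rng(0)
word = rng.choice([-1.0, 1.0], size=16)   # pattern in a linguistic module
image = rng.choice([-1.0, 1.0], size=16)  # pattern in a visual module

eta = 1.0                                 # illustrative learning rate
W = np.zeros((16, 16))
W += eta * np.outer(image, word)          # Hebb's rule: simultaneous
                                          # presentation binds the patterns

recalled = np.sign(W @ word)              # cueing with the word alone
assert np.array_equal(recalled, image)    # ...reactivates the bound image
```

Repeating the update for many word/image pairs accumulates all the bindings in the same weight matrix, which is why the presentation must be simultaneous: Hebbian learning only strengthens connections between units that are active together.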

Linguistic and visual modules are grouped into a “synthetic cortex.” Each module is an auto-associative neural net trained to synaptically bind interrelated words with one another. With all nets subjected to a given noisy input pattern, one module typically responds with the memory of some word association, thus producing a meaningful output pattern that is broadcast to all other modules within the synthetic cortex, with yet another module typically resonating and generating another associated term, and so on.
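The broadcast-and-resonate loop can be illustrated with Hopfield-style auto-associative modules, each storing a single memory (all patterns and sizes here are illustrative assumptions): a noisy cue is shown to every module, the module whose recall best matches its own stored memory "resonates," and its clean output becomes the pattern broadcast to the rest of the assembly.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_module(pattern):
    W = np.outer(pattern, pattern)   # Hebbian storage of one memory
    np.fill_diagonal(W, 0.0)
    return W, pattern

patterns = [rng.choice([-1.0, 1.0], size=32) for _ in range(3)]
modules = [make_module(p) for p in patterns]

noisy = patterns[0].copy()
flip = rng.choice(32, size=6, replace=False)
noisy[flip] *= -1                    # corrupt the cue with synaptic noise

# Every module attempts recall from the same noisy input; the module whose
# output resonates most strongly with its stored memory wins the broadcast.
recalls = [np.sign(W @ noisy) for W, _ in modules]
scores = [float(r @ p) for r, (_, p) in zip(recalls, modules)]
winner = int(np.argmax(scores))
broadcast = recalls[winner]          # clean pattern sent to all other modules
assert np.array_equal(broadcast, patterns[0])
```

In the full architecture the broadcast pattern would in turn cue another module, producing the chain of resonances described above; this sketch shows only a single round.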

In developing the inventions, related linguistic modules connected into a preliminary notion that subsequently enlisted related semantic modules. Similarly, the combination of these related modules triggered the hybridization of imagery absorbed within neural modules containing the memories of the associated concept hives.

An additional neural module cumulatively learned the connectivity of simultaneously resonating linguistic and visual modules as concepts and their learned consequences were presented to the entire assembly during mentoring sessions. Thus, if a specific term is presented to the entire assembly, not only will the associated module resonate, but so will related modules and images.

As an aside, the connectivity net is constantly infiltrated with noise to simulate the diffusion of stress neurotransmitters such as cortical adrenaline. Activation of other modules through associative chaining triggers reduction in noise levels within this net that simulates the release of serotonin, which in neurobiology would tend to neutralize adrenaline, relaxing any synaptic chaos, and promoting the learning that binds these modules into concepts and their consequences.
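The neurotransmitter analogy reduces to a simple feedback loop, sketched below under loudly stated assumptions: the scoring function, the damping and amplification constants, and the threshold are all illustrative stand-ins for the modules that chain memories to predict consequences. Noise (the "adrenaline" analogue) perturbs activity to spawn new juxtapositions; a favorable predicted consequence damps the noise (the "serotonin" analogue) and reinforces the current notion into memory.

```python
import random

random.seed(42)
noise_level = 1.0
reinforced = []          # notions consolidated during "Eureka" moments

def consequence_score(idea):
    # Illustrative stand-in for the chained-memory consequence prediction.
    return sum(idea) / len(idea)

for step in range(20):
    # synaptic noise generates a candidate juxtaposition of elements
    idea = [random.gauss(0.0, noise_level) for _ in range(8)]
    if consequence_score(idea) > 0.3:               # favorable prediction
        noise_level *= 0.8                          # "serotonin" damps chaos
        reinforced.append(idea)                     # notion becomes memory
    else:
        noise_level = min(noise_level * 1.05, 2.0)  # "adrenaline" sustains noise
```

The design point is the coupling: reinforcement and noise reduction happen together, so the system settles around notions whose predicted consequences are favorable instead of wandering indefinitely.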