Age, Biography and Wiki

Ray Solomonoff was an American computer scientist and inventor best known for his pioneering work in artificial intelligence. He was born on July 25, 1926, in Cleveland, Ohio. He attended the University of Chicago, where he studied under professors such as Rudolf Carnap and Enrico Fermi, and graduated with an M.S. in Physics in 1951. Solomonoff is credited with inventing the concept of algorithmic probability, which underlies his General Theory of Inductive Inference and much of algorithmic information theory. He was one of the original attendees of the 1956 Dartmouth Summer Research Project on Artificial Intelligence, the meeting at which the field was named. From 1970 he carried out most of his research at Oxbridge Research, his own one-man company, with visiting periods at institutions including MIT, the University of Saarland, and the Dalle Molle Institute for Artificial Intelligence in Lugano. Solomonoff died on December 7, 2009, at the age of 83.

Popular As Ray Solomonoff
Occupation Researcher (computer scientist)
Age 83 years old
Zodiac Sign Leo
Born 25 July, 1926
Birthday 25 July
Birthplace Cleveland, Ohio, United States
Date of death December 7, 2009
Died Place Not Available
Nationality American

Dating & Relationship status

Information about Solomonoff's personal relationships is not available.

Family
Parents Phillip Julius Solomonoff and Sarah Mashman Solomonoff
Wife Not Available
Sibling Not Available
Children Not Available



Timeline

2008

In Feb. 2008, he gave the keynote address at the Conference "Current Trends in the Theory and Application of Computer Science" (CTTACS), held at Notre Dame University in Lebanon. He followed this with a short series of lectures, and began research on new applications of Algorithmic Probability.

2006

In 2006 he spoke at AI@50, "Dartmouth Artificial Intelligence Conference: the Next Fifty Years" commemorating the fiftieth anniversary of the original Dartmouth summer study group. Solomonoff was one of five original participants to attend.

1999

A 1999 report generalizes the Universal Distribution and associated convergence theorems to unordered sets of strings, and a 2008 report extends them to unordered pairs of strings.

1997

In 1997, 2003 and 2006 he showed that incomputability and subjectivity are both necessary and desirable characteristics of any high performance induction system.

A description of Solomonoff's life and work prior to 1997 is in "The Discovery of Algorithmic Probability", Journal of Computer and System Sciences, Vol 55, No. 1, pp 73–88, August 1997. The paper, as well as most of the others mentioned here, is available on his website at the publications page.

1985

Throughout his career Solomonoff was concerned with the potential benefits and dangers of A.I., discussing it in many of his published reports. In 1985 he analyzed a likely evolution of A.I., giving a formula predicting when it would reach the "Infinity Point". This work is part of the history of thought about a possible technological singularity.

1984

About 1984, at an annual meeting of the American Association for Artificial Intelligence (AAAI), it was decided that probability was in no way relevant to A.I.

1970

In many of his papers he described how to search for solutions to problems, and in the 1970s and early 1980s he developed what he felt was the best way for the machine to update its probability distribution as it learned.

In 1970 he formed his own one man company, Oxbridge Research, and continued his research there except for periods at other institutions such as MIT, University of Saarland in Germany and the Dalle Molle Institute for Artificial Intelligence in Lugano, Switzerland. In 2003 he was the first recipient of the Kolmogorov Award by The Computer Learning Research Center at the Royal Holloway, University of London, where he gave the inaugural Kolmogorov Lecture. Solomonoff was most recently a visiting professor at the CLRC.

1968

In 1968 he found a proof of the efficacy of Algorithmic Probability but, mainly because of a lack of general interest at that time, did not publish it until 10 years later. In the resulting 1978 report he published the proof of the convergence theorem.

In the 1978 report he shows that Algorithmic Probability is complete: if there is any describable regularity in a body of data, Algorithmic Probability will eventually discover that regularity, requiring only a relatively small sample of the data. Algorithmic Probability is the only probability system known to be complete in this way. As a necessary consequence of its completeness it is incomputable. The incomputability arises because some algorithms (a subset of those that are partially recursive) can never be evaluated fully, because doing so would take too long. But these programs will at least be recognized as possible solutions. On the other hand, any computable system is incomplete: there will always be descriptions outside that system's search space that will never be acknowledged or considered, even in an infinite amount of time. Computable prediction models hide this fact by ignoring such algorithms.

1965

In 1965, the Russian mathematician Kolmogorov independently published similar ideas. When he became aware of Solomonoff's work, he acknowledged Solomonoff, and for several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was more concerned with randomness of a sequence. Algorithmic Probability and Universal (Solomonoff) Induction became associated with Solomonoff, who was focused on prediction — the extrapolation of a sequence.

1964

The probability is defined with reference to a particular universal Turing machine. Solomonoff showed, and in 1964 proved, that the choice of machine, while it could add a constant factor, would not change the probability ratios very much. In this sense the probabilities are essentially machine-independent.

He enlarged his theory, publishing a number of reports leading up to the publications in 1964. The 1964 papers give a more detailed description of Algorithmic Probability, and Solomonoff Induction, presenting five different models, including the model popularly called the Universal Distribution.

1960

Solomonoff first described algorithmic probability in 1960, publishing the theorem that launched Kolmogorov complexity and algorithmic information theory. He first described these results at a conference at Caltech in 1960, and in a report, Feb. 1960, "A Preliminary Report on a General Theory of Inductive Inference." He clarified these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I and Part II.

Generalizing the concept of probabilistic grammars led him to his discovery in 1960 of Algorithmic Probability and General Theory of Inductive Inference.

Prior to the 1960s, the usual method of calculating probability was based on frequency: taking the ratio of favorable results to the total number of trials. In his 1960 publication, and, more completely, in his 1964 publications, Solomonoff seriously revised this definition of probability. He called this new form of probability "Algorithmic Probability" and showed how to use it for prediction in his theory of inductive inference. As part of this work, he produced the philosophical foundation for the use of Bayes rule of causation for prediction.
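
The frequency definition he revised can be stated in a few lines of code (a toy illustration; the coin-flip data below is invented for the example):

```python
# Frequency-based probability, the pre-1960s definition Solomonoff revised:
# the ratio of favorable results to the total number of trials.
trials = "HTHHTHTTHH"          # ten hypothetical coin flips
favorable = trials.count("H")  # count of favorable results (heads)
p_heads = favorable / len(trials)
print(p_heads)  # 0.6
```

Note that this estimate says nothing before any trials exist, whereas Algorithmic Probability assigns a prior to a sequence from its description length alone.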

The basic theorem of what was later called Kolmogorov Complexity was part of his General Theory. Writing in 1960, he begins: "Consider a very long sequence of symbols ... We shall consider such a sequence of symbols to be 'simple' and have a high a priori probability, if there exists a very brief description of this sequence – using, of course, some sort of stipulated description method. More exactly, if we use only the symbols 0 and 1 to express our description, we will assign the probability 2^(-N) to a sequence of symbols if its shortest possible binary description contains N digits."
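
The quoted assignment, probability 2^(-N) for a shortest description of N digits, can be sketched directly. The description lengths below are illustrative assumptions, since the true shortest description is not computable in general:

```python
# A minimal sketch of the a priori probability 2**(-N) for a sequence
# whose shortest binary description has N digits.  The lengths used
# here are made up for illustration, not outputs of a real machine.
def length_prior(n_digits: int) -> float:
    """A priori probability of a sequence whose shortest description has n_digits bits."""
    return 2.0 ** (-n_digits)

# A highly regular sequence admits a short description and a high prior;
# a random-looking sequence of the same length does not.
print(length_prior(8))   # 0.00390625
print(length_prior(64))  # about 5.4e-20
```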

Later in the same 1960 publication Solomonoff describes his extension of the single-shortest-code theory. This is Algorithmic Probability. He states: "It would seem that if there are several different methods of describing a sequence, each of these methods should be given some weight in determining the probability of that sequence." He then shows how this idea can be used to generate the universal a priori probability distribution and how it enables the use of Bayes rule in inductive inference. Inductive inference, by adding up the predictions of all models describing a particular sequence, using suitable weights based on the lengths of those models, gets the probability distribution for the extension of that sequence. This method of prediction has since become known as Solomonoff induction.
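
The weighting idea can be made concrete with a deliberately tiny stand-in "machine" (an assumption for illustration only: here a program p simply outputs p repeated forever, in place of a universal Turing machine). Summing 2**(-len(p)) over all programs consistent with the observed bits, for each possible continuation, gives the flavor of Solomonoff induction:

```python
from itertools import product

def run(p: str, n: int) -> str:
    """Toy machine: program p outputs p repeated forever; return first n bits."""
    return (p * (n // len(p) + 1))[:n]

def M(x: str, max_len: int = 10) -> float:
    """Approximate algorithmic probability of prefix x under the toy machine:
    sum of 2**(-len(p)) over all programs p whose output starts with x."""
    total = 0.0
    for L in range(1, max_len + 1):
        for bits in product("01", repeat=L):
            p = "".join(bits)
            if run(p, len(x)) == x:
                total += 2.0 ** (-L)
    return total

x = "0101"
p0, p1 = M(x + "0"), M(x + "1")          # weights of the two continuations
print(round(p0 / (p0 + p1), 3))          # 0.727: the short program "01" favors next bit 0
```

Short programs such as "01" dominate the sum, so the prediction continues the regular pattern, which is exactly the inductive behavior the weighting is meant to produce.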

1956

He was one of the 10 attendees at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. He wrote and circulated a report among the attendees: "An Inductive Inference Machine". It viewed machine learning as probabilistic, with an emphasis on the importance of training sequences, and on the use of parts of previous solutions to problems in constructing trial solutions for new problems. He published a version of his findings in 1957. These were the first papers to be written on probabilistic machine learning.

Other scientists who had been at the 1956 Dartmouth Summer Conference (such as Newell and Simon) were developing the branch of Artificial Intelligence that used machines governed by fact-based, if-then rules. Solomonoff was developing the branch of Artificial Intelligence that focused on probability and prediction; his specific view of A.I. described machines governed by the Algorithmic Probability distribution. Such a machine generates theories together with their associated probabilities to solve problems and, as new problems and theories arise, updates the probability distribution on the theories.
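
The update loop described here can be sketched with Bayes' rule over a hand-picked set of toy theories (the theories and their prior weights are invented for illustration; Solomonoff's machine would instead weight all describable theories by their description lengths):

```python
# Three hypothetical theories about a bit stream, with assumed prior weights.
priors = {"always_0": 0.5, "fair_coin": 0.3, "always_1": 0.2}

def likelihood(theory: str, bit: str) -> float:
    """Probability each toy theory assigns to an observed bit."""
    table = {"always_0": {"0": 1.0, "1": 0.0},
             "fair_coin": {"0": 0.5, "1": 0.5},
             "always_1": {"0": 0.0, "1": 1.0}}
    return table[theory][bit]

def update(dist: dict, bit: str) -> dict:
    """One Bayesian update: re-weight theories by likelihood, then normalize."""
    unnorm = {t: p * likelihood(t, bit) for t, p in dist.items()}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

dist = priors
for bit in "111":
    dist = update(dist, bit)
print(max(dist, key=dist.get))  # always_1: the data have shifted the weight to it
```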

1952

In 1952 he met Marvin Minsky, John McCarthy and others interested in machine intelligence. In 1956 Minsky, McCarthy and others organized the Dartmouth Summer Research Conference on Artificial Intelligence, where Solomonoff was one of the original 10 invitees—he, McCarthy, and Minsky were the only ones to stay all summer. It was for this group that Artificial Intelligence was first named as a science. Computers at the time could solve very specific mathematical problems, but not much else. Solomonoff wanted to pursue a bigger question: how to make machines more generally intelligent, and how computers could use probability for this purpose.

1950

He wrote three papers, two with Anatol Rapoport, in 1950–52, that are regarded as the earliest statistical analysis of networks.

In the late 1950s, he invented probabilistic languages and their associated grammars. A probabilistic language assigns a probability value to every possible string.
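
A minimal example of such a language (an invented toy, not one of Solomonoff's own grammars): generate a binary string by stopping with probability 1/2 at each step, otherwise emitting '0' or '1' with probability 1/4 each. Every string then receives a probability, and the probabilities over all strings sum to 1:

```python
def prob(s: str) -> float:
    """Probability of string s under the toy language: stop with prob 1/2
    at each step, else emit '0' or '1' with prob 1/4 each."""
    return (0.25 ** len(s)) * 0.5

# Check normalization: for each length n there are 2**n strings,
# so the total mass is sum over n of 2**n * (1/4)**n * 1/2 = sum of (1/2)**(n+1).
total = sum(2 ** n * prob("0" * n) for n in range(60))
print(round(total, 6))  # 1.0
```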

1942

From his earliest years he was motivated by the pure joy of mathematical discovery and by the desire to explore where no one had gone before. At the age of 16, in 1942, he began to search for a general method to solve mathematical problems.

1926

Ray Solomonoff (July 25, 1926 – December 7, 2009) was the inventor of algorithmic probability, his General Theory of Inductive Inference (also known as Universal Inductive Inference), and was a founder of algorithmic information theory. He was an originator of the branch of artificial intelligence based on machine learning, prediction and probability. He circulated the first report on non-semantic machine learning in 1956.

Ray Solomonoff was born on July 25, 1926, in Cleveland, Ohio, son of Jewish Russian immigrants Phillip Julius and Sarah Mashman Solomonoff. He attended Glenville High School, graduating in 1944. In 1944 he joined the United States Navy as Instructor in Electronics. From 1947–1951 he attended the University of Chicago, studying under Professors such as Rudolf Carnap and Enrico Fermi, and graduated with an M.S. in Physics in 1951.