LLMs can function not only as knowledge databases but also as dynamic, end-user-programmable neural computers. Many people view LLMs simply as compressed databases of world knowledge that deliver answers to user queries. Internally, an LLM consists of billions or trillions of weights learned during pretraining and fine-tuning; it is queried through natural-language conversation and driven by a next-token prediction function. Applying this completion function repeatedly generates tokens probabilistically, producing full responses, and sampling heuristics such as greedy search, beam search, and random sampling trade determinism for diversity in the answers. Notation for nondeterministic functions and big-step semantics can describe this computational behavior formally. The summary discusses the implementation of the completion algorithm, its control via hyperparameters, and termination criteria based on stop tokens.
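The token-by-token completion loop described above can be sketched in a few lines. This is a minimal illustration, not a real LLM: the `toy_next_logits` bigram table and all names here are hypothetical stand-ins for a model's next-token distribution. Temperature 0 gives deterministic greedy decoding; a positive temperature gives random sampling, and generation terminates on a stop token or a token budget.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature, then normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def complete(next_logits, prompt, max_tokens=10, temperature=0.0,
             stop_token="<eos>", seed=0):
    """Repeatedly sample from the model's next-token distribution.

    temperature == 0.0 means greedy (argmax) decoding; higher temperatures
    flatten the distribution for more diverse output. Generation stops at
    the stop token or after max_tokens (the hyperparameters of the loop).
    """
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(max_tokens):
        vocab, logits = next_logits(tokens)
        if temperature == 0.0:
            tok = vocab[max(range(len(logits)), key=lambda i: logits[i])]
        else:
            probs = softmax(logits, temperature)
            tok = rng.choices(vocab, weights=probs, k=1)[0]
        if tok == stop_token:
            break
        tokens.append(tok)
    return tokens

# Toy "model": a bigram table standing in for billions of learned weights.
TABLE = {
    "the": (["cat", "dog", "<eos>"], [2.0, 1.0, 0.1]),
    "cat": (["sat", "<eos>"], [1.5, 0.5]),
    "dog": (["ran", "<eos>"], [1.5, 0.5]),
    "sat": (["<eos>"], [1.0]),
    "ran": (["<eos>"], [1.0]),
}

def toy_next_logits(tokens):
    return TABLE[tokens[-1]]

print(complete(toy_next_logits, ["the"]))                   # → ['the', 'cat', 'sat']
print(complete(toy_next_logits, ["the"], temperature=1.0))  # nondeterministic
```

The greedy run always follows the highest-logit token at each step, while different seeds at a positive temperature can yield different completions, which is the nondeterminism the formal semantics must account for.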