Apologies if the title is confusing, but I couldn't think of a better way to phrase it briefly.

Whenever we define or specify a certain automaton (such as a finite state machine or a Turing machine) by writing down all of its states, its transition function, and so on, it feels awfully similar to defining an algorithm. For example, I can define a machine that tells whether a number is divisible by 3, and doing so feels very much like writing an algorithm and the steps for solving the problem.
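
For concreteness, here's roughly how I'd write that divisibility-by-3 machine down: a transition table plus a tiny driver loop (the state numbering, the digit-by-digit input encoding, and the function names are just my own illustrative choices):

```python
# Sketch of a DFA that decides divisibility by 3, reading a number digit by digit.
# State i means "the digits read so far leave remainder i when divided by 3".

STATES = {0, 1, 2}
START = 0
ACCEPTING = {0}  # remainder 0 at the end of the input => divisible by 3

def transition(state: int, digit: int) -> int:
    """delta(q, d): appending digit d multiplies the value read so far by 10 and adds d."""
    return (state * 10 + digit) % 3

def divisible_by_3(n: int) -> bool:
    state = START
    for ch in str(n):  # feed the input one symbol at a time
        state = transition(state, int(ch))
    return state in ACCEPTING

assert divisible_by_3(123) and not divisible_by_3(124)
```

Writing down STATES, START, ACCEPTING, and transition is the "specify the machine" part; the little loop that feeds in the input is what makes it feel like running an ordinary algorithm.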

Now I understand that the two aren't exactly equivalent. But would it be incorrect to say that the specification of a machine is a type of algorithm, since we're defining the steps it takes to solve a problem, i.e. how it responds to each state and input?

[–] solrize@lemmy.world 0 points 7 months ago* (last edited 7 months ago)

Well, "algorithm" isn't a perfectly well-defined term, and to be precise about randomness we sometimes say "randomized algorithm" when an algorithm uses an RNG. You can also extend the notion of a Turing machine to include a randomization feature if you need that for something. I'd say this whole issue is usually not important; it shows up mostly in a few niche areas.
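
To make that concrete, here's a minimal sketch of what "an algorithm that uses an RNG" looks like in code. Randomized quickselect is just my own pick of a standard example, not anything specific to your book:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element (0-indexed) of xs.

    A "Las Vegas" randomized algorithm: the RNG only affects the running
    time, never the answer. Expected O(n) comparisons with a random pivot.
    """
    pivot = random.choice(xs)  # the only randomized step
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

assert quickselect([7, 2, 9, 4, 4, 1], 3) == 4
```

Here the randomness only affects how long the algorithm takes; other randomized algorithms (primality tests, for instance) also allow a small probability of a wrong answer, and "randomized algorithm" is meant to cover both kinds.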

I would expect the Church-Turing (CT) thesis to gloss over the issue a little bit and, at the end of the day, to allow randomized computations as realizable.

Can I ask what book you are using? The ones I know of are now pretty old, I guess.

I don't know of instances where RNGs can make an uncomputable function computable; I think that never happens. Whether an RNG can let you solve a problem efficiently when every non-randomized solution is inefficient (i.e. a PRNG won't do, you need a real RNG) is an open research question, usually phrased as whether P = BPP.
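
The usual textbook illustration of randomness apparently helping is Freivalds' algorithm: to check whether A·B = C you multiply by random 0/1 vectors, which costs about n^2 work per trial, while the known deterministic checks essentially cost a full matrix multiplication. A rough sketch (plain Python lists and my own variable names; it illustrates the flavor of the question but of course doesn't settle P vs. BPP):

```python
import random

def mat_vec(m, v):
    """Multiply an n x n matrix (list of rows) by a length-n vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def freivalds(a, b, c, trials=20):
    """Probabilistically check whether the matrix product a*b equals c.

    Each trial picks a random 0/1 vector r and compares a*(b*r) with c*r,
    which is O(n^2) work. A wrong c survives a single trial with probability
    at most 1/2, so `trials` rounds give error probability <= 2**-trials.
    """
    n = len(a)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(a, mat_vec(b, r)) != mat_vec(c, r):
            return False  # definitely not equal
    return True  # equal with high probability

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]
bad = [[19, 22], [43, 51]]
print(freivalds(a, b, good))  # True
print(freivalds(a, b, bad))   # False, except with vanishing probability
```

As far as I know, no deterministic check that runs in n^2 time is known, and asking whether every speedup like this can be derandomized is the P vs. BPP flavor of question.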

The study of computability, Turing machines, etc. started in the 1920s and 1930s, before there were computers, so the subject area was mathematical logic rather than the then-nonexistent "computer science". These days CS is mostly about specific algorithms, and even complexity theory (a niche area in theoretical CS, which is in turn a niche area of CS) is mostly about the computable, stratifying computable problems into layers according to how much computation it takes to solve them.

If you're interested in the more "out there" questions of what can be computed at all, you might benefit from studying some logic (I mean the kind taught in math departments). Computability theory is, imho, mostly a topic in logic; it overlaps CS, but only somewhat. I think any theory-oriented programmer should study at least a little logic though, say enough to understand how predicate logic works. Unfortunately I don't know of a good book to suggest, but maybe someone else here does. There are good Wikipedia articles on the topic, but I find it useful to have something to read in linear order.