Identification and the definition of data type

Problem Detail: 

A frequently cited definition of a data type is that it is a set of values together with a set of operations on those values.

I've never read any mention that a type must be capable of being identified, although the programming languages I've encountered use names to distinguish different types.

  • Is the use of identifiers then just a convenience for the programmer? In principle, couldn't you just replace the identifier with the type definition each time it was needed?
  • Are there any programming languages that don't use named types?
  • And, if so, what implications, if any, are there of this? (One possibility I can think of is that it means there is no nominal typing.)
Asked By : Steven Maude
Answered By : jmite

Is the use of identifiers then just a convenience for the programmer? In principle, couldn't you just replace the identifier with the type definition each time it was needed?

Yes. You can have a strongly typed language that completely infers types, where the program at no point refers to the types, at all, ever.

For example, you could write programs in Hindley-Milner (without the gamut of extensions you see in something like Haskell) without ever referring to a type. You shouldn't, but you could.

Types and values are usually kept separate in non-dependently-typed languages, and if your system allows for full type reconstruction, you don't need to refer to types in your code at all.
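
For instance, here is a minimal Haskell sketch (the function names are mine, not from the question) in which no type annotation appears anywhere, yet GHC infers a fully polymorphic type for every binding:

    -- No type annotations anywhere; GHC infers, for example,
    -- compose :: (b -> c) -> (a -> b) -> a -> c
    compose f g x = f (g x)

    twice f = f . f

    main = print (compose (+ 1) (twice (* 2)) 5)   -- prints 21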

Are there any programming languages that don't use named types?

Doubtful. When you use an integer but the program expects a boolean, the compiler has to tell you something. So it's going to want to give a name to those types.

Without primitive types, you can't do much useful in a programming language, so you're going to want to at least give your primitive types names.

Also, having all sum types be anonymous would be extremely tedious. You could refer to the List type as $\mu T .\ Nil + Cons\ T$, and occasionally you will see something like this on paper, but it gets hard to use very fast. As you can see, recursive types are particularly awkward in this scheme, and you usually end up needing the type-level equivalent of the $Y$-combinator.
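
As a rough Haskell rendering of that "type-level $Y$-combinator" idea (the names Fix and ListF are mine): the list type is built from a generic fixpoint over an anonymous one-layer sum, and the List alias at the end exists only for readability:

    -- A generic type-level fixpoint: the Y-combinator for types.
    newtype Fix f = In (f (Fix f))

    -- One layer of a list: Nil + Cons a r, where r marks the recursive position.
    data ListF a r = NilF | ConsF a r

    -- The alias could be inlined everywhere; it is only a convenience.
    type List a = Fix (ListF a)

    nil :: List a
    nil = In NilF

    cons :: a -> List a -> List a
    cons x xs = In (ConsF x xs)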

Now, if you use something like Church encoding, you can get away without having primitive types at all, in which case everything is probably going to be a higher-rank polymorphic type. For example, you can encode the natural numbers as forall a . (a -> a) -> a -> a. But again, this is more useful on paper than in the hands of a programmer.
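
A minimal Haskell sketch of that encoding, using GHC's RankNTypes (the names are mine):

    {-# LANGUAGE RankNTypes #-}

    -- A Church numeral n means "apply a function n times to a starting value".
    type Church = forall a . (a -> a) -> a -> a

    zero :: Church
    zero _ z = z

    suc :: Church -> Church
    suc n s z = s (n s z)

    add :: Church -> Church -> Church
    add m n s z = m s (n s z)

    -- Convert back to an ordinary Int only to observe the result.
    toInt :: Church -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (add (suc (suc zero)) (suc zero)))   -- prints 3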

And, if so, what implications, if any, are there of this?

None that I can think of, other than the lack of nominal typing, and you could even fake nominal typing by attaching some singleton type to your values.

For example, ((), forall a . (a -> a) -> a -> a) is isomorphic to forall a . (a -> a) -> a -> a, but now you have two isomorphic types the system considers distinct.
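
Here is a small Haskell sketch of that trick (the tag names are mine); instead of (), two distinct singleton tags are attached to otherwise-identical representations, and the type checker now keeps them apart even though both are isomorphic to a plain Double:

    -- Two singleton tag types; each has exactly one value.
    data MetersTag = MetersTag
    data FeetTag   = FeetTag

    -- (tag, Double) is isomorphic to Double, but the two tagged types
    -- are distinct as far as the type checker is concerned.
    type Meters = (MetersTag, Double)
    type Feet   = (FeetTag, Double)

    toMeters :: Feet -> Meters
    toMeters (FeetTag, d) = (MetersTag, d * 0.3048)

    -- Applying toMeters to a Meters value is rejected at compile time.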

Question Source : http://cs.stackexchange.com/questions/60880
