All numeric constants, unless explicitly given a type, as in:
const x int = 5
are untyped:
const y = 5 // this is untyped
Untyped constants do not exist at runtime -- they are a compile-time
phenomenon whose value can't be used until it is coalesced into a type.
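For instance (a minimal sketch; the constant name and values here are my
own), the compiler itself rejects an untyped constant that doesn't fit
the type it is coalesced into, precisely because all of this happens at
compile time:

package main

import "fmt"

const big = 1 << 40 // untyped; constants have arbitrary precision at compile time

func main() {
	var ok int64 = big // fine: 1<<40 fits in int64
	// var bad int32 = big // rejected at compile time: constant 1099511627776 overflows int32
	fmt.Println(ok)
}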
const (
y = 5
z = 7.4
)
var (
	a int = y     // untyped y coalesced into int at compile time
	b byte = y    // coalesced into byte
	c uint64 = y  // coalesced into uint64
	d float32 = z // untyped z coalesced into float32
	e = y         // type inferred: int
	f = z         // type inferred: float64
)
No conversions occur at runtime anywhere in the above. Since y is
untyped, a, b, and c will have its value coalesced (at _compile_ time)
into their respective types, and the same applies to d with z. When an
untyped integer constant is used for type inference (as with e), the
inferred type is always int; for untyped floating-point constants it is
always float64 (so f is a float64). It's like some interpretations of
quantum physics: the particle has undefined properties until observation
forces it to assume a given form. Here the property is the type, the
particle is the untyped constant (y or z), and the observational act is
type inference.
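You can confirm the inferred defaults directly (a minimal sketch reusing
y and z from above; fmt's %T verb prints a value's type):

package main

import "fmt"

const (
	y = 5
	z = 7.4
)

func main() {
	e := y
	f := z
	fmt.Printf("%T %T\n", e, f) // prints: int float64
}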
Also note that if you do something like:
g := int(int64(int(5)))
there is no actual conversion going on at runtime: 5 is a constant
within the range of both int and int64, so the whole expression is a
constant expression, trivially resolved at compile time.
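By contrast (a minimal sketch; the variable names are mine), converting
a non-constant value is a real conversion, and converting a constant to
a type it doesn't fit in is rejected before the program ever runs:

package main

import "fmt"

func main() {
	g := int(int64(int(5))) // constant expression: folded away at compile time
	n := 5
	h := int(int64(n)) // n is a variable, so these are conceptually runtime conversions
	                   // (though the compiler may still optimize them away)
	// _ = byte(300)   // would not compile: constant 300 overflows byte
	fmt.Println(g, h)
}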
--