I have code that follows the general design of:
protocol DispatchType {}
class DispatchType1: DispatchType {}
class DispatchType2: DispatchType {}
func doBar<D:DispatchType>(value:D) {
print("general function called")
}
func doBar(value:DispatchType1) {
print("DispatchType1 called")
}
func doBar(value:DispatchType2) {
print("DispatchType2 called")
}
where in reality DispatchType is actually a backend storage type. The doBar functions are optimized methods that depend on the correct storage type. Everything works fine if I do:
let d1 = DispatchType1()
let d2 = DispatchType2()
doBar(value: d1) // "DispatchType1 called"
doBar(value: d2) // "DispatchType2 called"
However, if I make a function that calls doBar:
func test<D:DispatchType>(value:D) {
doBar(value: value)
}
and I try a similar calling pattern, I get:
test(value: d1) // "general function called"
test(value: d2) // "general function called"
This seems like something that Swift should be able to handle, since it should be able to determine the type constraints at compile time. As a quick test, I also tried writing doBar as:
func doBar<D:DispatchType>(value:D) where D:DispatchType1 {
print("DispatchType1 called")
}
func doBar<D:DispatchType>(value:D) where D:DispatchType2 {
print("DispatchType2 called")
}
but I get the same results.
Is this correct Swift behavior, and if so, is there a good way to work around it?
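For reference, one way to get dynamic behavior here would presumably be to make doBar a protocol requirement, so it is dispatched through the protocol at runtime rather than resolved by overloading at compile time. A minimal sketch of that approach (not my actual code):

protocol DispatchType {
    func doBar()
}

class DispatchType1: DispatchType {
    func doBar() { print("DispatchType1 called") }
}

class DispatchType2: DispatchType {
    func doBar() { print("DispatchType2 called") }
}

func test<D: DispatchType>(value: D) {
    value.doBar()   // dispatched dynamically via the protocol requirement
}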
Edit 1: Example of why I was trying to avoid using protocols. Suppose I have the code (greatly simplified from my actual code):
protocol Storage {
// ...
}
class Tensor<S:Storage> {
// ...
}
For the Tensor class I have a base set of operations that can be performed on Tensors. However, the operations themselves will change their behavior based on the storage. Currently I accomplish this with:
func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { ... }
While I can put these in the Tensor class and use extensions:
extension Tensor where S:CBlasStorage {
func dot(_ tensor:Tensor<S>) -> Tensor<S> {
// ...
}
}
this has a few side effects which I don't like:
1. I think dot(lhs, rhs) is preferable to lhs.dot(rhs). Convenience functions can be written to get around this (see the sketch after this list), but that will create a huge explosion of code.
2. This will cause the Tensor class to become monolithic. I really prefer having it contain the minimal amount of code necessary and expanding its functionality with auxiliary functions.
3. Related to (2), this means that anyone who wants to add new functionality will have to touch the base class, which I consider bad design.
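For point (1), the kind of forwarding shim I'd need looks roughly like this, one per operation per storage constraint (hypothetical sketch, just forwarding to the extension method):

func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> where S: CBlasStorage {
    return lhs.dot(rhs)   // forwards to the method defined in the constrained extension
}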
Edit 2: One alternative is that things work as expected if you use constraints for everything:
func test<D:DispatchType>(value:D) where D:DispatchType1 {
doBar(value: value)
}
func test<D:DispatchType>(value:D) where D:DispatchType2 {
doBar(value: value)
}
will cause the correct doBar to be called. This also isn't ideal, as it will cause a lot of extra code to be written, but at least it lets me keep my current design.
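For example, with those two constrained versions of test in place, the earlier calls resolve to the specialized overloads:

test(value: d1) // "DispatchType1 called"
test(value: d2) // "DispatchType2 called"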
Edit 3: I came across documentation showing the use of the static keyword with generics. This helps at least with point (1):
class Tensor<S:Storage> {
// ...
static func cos(_ tensor:Tensor<S>) -> Tensor<S> {
// ...
}
}
allows you to write:
let result = Tensor.cos(value)
and it supports operator overloading:
let result = value1 + value2
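The operator itself is declared just like the static functions above; a rough sketch (the body is elided, same as in the cos example):

extension Tensor {
    static func + (lhs: Tensor<S>, rhs: Tensor<S>) -> Tensor<S> {
        // ... storage-specific implementation, same idea as cos above ...
    }
}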
It does have the added verbosity of the required Tensor prefix, though. This can be made a little better with:
typealias T<S:Storage> = Tensor<S>
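so the call site becomes (with the storage type inferred from value):

let result = T.cos(value)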