Commit Graph

54 Commits

SHA1 Message Date
1a90f9d3a6 Cuda implementation for copying data around. 2023-06-23 11:18:29 +01:00
79e4b29c2f Add the reshape method and operation (without grad for now). 2023-06-23 10:51:05 +01:00
c4c6167949 Add the continuous method. 2023-06-23 10:45:20 +01:00
4712dcc2f6 Actually copy the data around in cat (cpu only). 2023-06-23 10:24:02 +01:00
6110db31c9 Add the cat operator (without the storage implementation for now). 2023-06-23 10:13:37 +01:00
bf9e1d1c23 Add the detach method. 2023-06-23 09:19:23 +01:00
3e7cb18d7f Handle tensor transfers between devices in the backprop. 2023-06-23 08:55:34 +01:00
3f79d81b6f Add transposition around arbitrary axis. 2023-06-23 08:51:13 +01:00
27d428af1a Add the backward pass for transpose. 2023-06-23 08:43:05 +01:00
3b550a56dc Transfer tensors between devices. 2023-06-23 08:35:22 +01:00
fc41ccb5bb Add the copy method. 2023-06-23 08:12:52 +01:00
552276749a Only keep track of the graph when needed. 2023-06-22 22:06:24 +01:00
fc83d97b41 Only support the contiguous case for cublas matmul. 2023-06-22 21:39:37 +01:00
7d9a8ff3f9 Do not ignore errors when cloning the storage. 2023-06-22 16:29:18 +01:00
2f7a072250 Rename as_slice to storage_data and implement the cuda version. 2023-06-22 16:00:22 +01:00
836ad5f76c Remove one level of indirection for the binary and unary ops. 2023-06-22 15:20:51 +01:00
625e08d6ab Abstract the implementation of Shape. 2023-06-22 12:39:15 +01:00
f052020ba2 Support cuda in to_vec3. 2023-06-22 12:22:51 +01:00
77712d4348 Addressing comments. 2023-06-22 13:13:35 +02:00
449af49b54 Adding size checking when creating a tensor from buffer + shape. 2023-06-22 13:08:57 +02:00
a8b6c848e0 Final updates. 2023-06-22 12:39:33 +02:00
04cf14f35a Moving to gemm and adding matmul backprop. Tentative `T` operator. 2023-06-22 12:37:02 +02:00
86e4cbbc3d Adding matmul 2023-06-22 12:25:58 +02:00
ce977b489e Adding matmul? 2023-06-22 12:25:58 +02:00
87a37b3bf3 Retrieve data from the gpu. 2023-06-22 11:01:49 +01:00
7c46de9584 Check that the tensor is contiguous before applying the kernel. 2023-06-21 21:28:59 +01:00
fcb4e6b84f Use a reference for the device. 2023-06-21 19:55:57 +01:00
71735c7a02 Move the data between the host and the device. 2023-06-21 19:43:25 +01:00
c654ecdb16 Add a specific example for cuda. 2023-06-21 18:56:04 +01:00
7adffafeda Abstract the gradient storage. 2023-06-21 14:29:48 +01:00
8cde0c5478 Add some skeleton code for GPU support. 2023-06-21 09:13:57 +01:00
f319583530 More QOL changes, binary op for constants. 2023-06-21 08:59:08 +01:00
0839954770 Add some binary ops. 2023-06-21 08:32:35 +01:00
3a5405ca6d Move the StridedIndex in its own module. 2023-06-21 07:44:36 +01:00
78bac0ed32 Add a couple operators. 2023-06-20 22:32:11 +01:00
f1f372b13e Add the affine transformation. 2023-06-20 21:51:35 +01:00
a419a9da72 Add some backprop test. 2023-06-20 20:54:35 +01:00
c4c303b6f1 Add some very basic backprop. 2023-06-20 20:33:44 +01:00
3b7984ccce Add some functions to create variables. 2023-06-20 19:31:35 +01:00
9ff8d2076a Add the topological sort for backprop. 2023-06-20 19:15:39 +01:00
671bcf060e Expose the tensor ids. 2023-06-20 14:22:04 +01:00
d922ff97f2 Add some unique identifier. 2023-06-20 13:00:04 +01:00
d9cb1917ce Add some unary ops. 2023-06-20 12:04:01 +01:00
6c5fc767a8 Add the slice indexing. 2023-06-20 10:50:58 +01:00
786544292d Add more to the binary operators. 2023-06-20 09:49:40 +01:00
7a31ba93e4 Start adding some ops. 2023-06-20 08:41:19 +01:00
ef6760117f Proper stride initialization. 2023-06-20 07:53:53 +01:00
bcae61b7f2 Cosmetic changes. 2023-06-19 21:30:03 +01:00
26d6288eb6 Add an easy way to create tensor objects. 2023-06-19 20:59:26 +01:00
01eeb0e72f Shuffle the shape bits around. 2023-06-19 20:22:12 +01:00
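
Several of the commits above (ef6760117f, 3a5405ca6d, c4c6167949, 7c46de9584, fc83d97b41) revolve around stride initialization and contiguity checks. The sketch below is only an illustration of the general idea, not code taken from those commits, and every name in it is hypothetical: in a row-major layout each dimension's stride is the product of the dimension sizes to its right, and a view is contiguous exactly when its strides match that layout.

    /// Row-major (C-contiguous) strides: each dimension's stride is the
    /// product of the dimension sizes after it, so the last stride is 1.
    fn contiguous_strides(shape: &[usize]) -> Vec<usize> {
        let mut strides = vec![1; shape.len()];
        for i in (0..shape.len().saturating_sub(1)).rev() {
            strides[i] = strides[i + 1] * shape[i + 1];
        }
        strides
    }

    /// A view is contiguous when its strides are exactly the row-major
    /// strides derived from its shape.
    fn is_contiguous(shape: &[usize], strides: &[usize]) -> bool {
        strides == contiguous_strides(shape).as_slice()
    }

    fn main() {
        let shape = [2usize, 3, 4];
        let strides = contiguous_strides(&shape);
        assert_eq!(strides, vec![12, 4, 1]);
        assert!(is_contiguous(&shape, &strides));

        // Transposing the last two axes swaps the corresponding shape and
        // stride entries, which makes the view non-contiguous.
        let t_shape = [2usize, 4, 3];
        let t_strides = [12usize, 1, 4];
        assert!(!is_contiguous(&t_shape, &t_strides));
        println!("strides for {:?}: {:?}", shape, strides);
    }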