Markopy
Utilizing Markov Models for brute-forcing attacks
CUDA-accelerated extension of Markov::API::ModelMatrix. More...
#include "cudaModelMatrix.h"
#include "cudarandom.h"
#include <curand_kernel.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <device_launch_parameters.h>
Namespaces
Markov
    Namespace for the markov-model related classes. Contains Model, Node and Edge classes.
Markov::API
    Namespace for the MarkovPasswords API.
Markov::API::CUDA
    Namespace for objects requiring CUDA libraries.
Functions
__global__ void Markov::API::CUDA::FastRandomWalkCUDAKernel (unsigned long int n, int minLen, int maxLen, char *outputBuffer, char *matrixIndex, long int *totalEdgeWeights, long int *valueMatrix, char *edgeMatrix, int matrixSize, int memoryPerKernelGrid, unsigned long *seed)
    CUDA kernel for the FastRandomWalk operation; a hedged launch sketch follows this list. More...
__device__ char * Markov::API::CUDA::strchr (char *p, char c, int s_len)
    strchr implementation in device space; a hedged sketch also follows this list. More...
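
The following host-side wrapper is a minimal sketch of how the kernel above could be launched. The grid and block sizes are placeholders and the d_* arguments are assumed to be pre-allocated device buffers; the real launch configuration is set inside Markov::API::CUDA::CUDAModelMatrix, not here.

#include <cuda_runtime.h>
#include "cudaModelMatrix.h"

// Hypothetical launch wrapper; names prefixed d_ are assumed device pointers.
void LaunchFastRandomWalk(unsigned long int n, int minLen, int maxLen,
                          char* d_outputBuffer, char* d_matrixIndex,
                          long int* d_totalEdgeWeights, long int* d_valueMatrix,
                          char* d_edgeMatrix, int matrixSize,
                          int memoryPerKernelGrid, unsigned long* d_seed)
{
    dim3 grid(1024);   // assumed number of blocks
    dim3 block(256);   // assumed threads per block
    Markov::API::CUDA::FastRandomWalkCUDAKernel<<<grid, block>>>(
        n, minLen, maxLen, d_outputBuffer, d_matrixIndex, d_totalEdgeWeights,
        d_valueMatrix, d_edgeMatrix, matrixSize, memoryPerKernelGrid, d_seed);
    cudaDeviceSynchronize();   // wait for every walk in this batch to finish
}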
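
Likewise, a minimal sketch of a length-bounded strchr in device space, assuming only the contract implied by the signature above (search the first s_len bytes of p for c); the actual body in cudaModelMatrix.cu may differ.

// Hypothetical device-side strchr sketch; named strchr_sketch to avoid
// implying this is the library's implementation.
__device__ char* strchr_sketch(char* p, char c, int s_len)
{
    for (int i = 0; i < s_len; i++) {
        if (p[i] == c) return p + i;   // pointer to the first occurrence of c
    }
    return nullptr;                    // c not found within the first s_len bytes
}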
CUDA-accelerated extension of Markov::API::ModelMatrix.
Extension of Markov::API::ModelMatrix which is modified to run on GPU devices. This implementation only supports Nvidia devices.
The class flattens and reduces a Markov::Model to a matrix. Matrix-level operations can then be used for generation events, yielding a significant performance improvement at the cost of O(N) memory complexity (O(1) memory in slow mode).
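
As an illustration of how one random-walk step could use the flattened matrices, here is a hedged device-side sketch. The row-major layout, the NextChar name, and the weighted-selection details are assumptions based on the kernel parameters above, not the library's implementation.

// Hypothetical single step of a random walk on the flattened model.
__device__ char NextChar(char current, const char* matrixIndex,
                         const long int* totalEdgeWeights,
                         const long int* valueMatrix, const char* edgeMatrix,
                         int matrixSize, unsigned long randomValue)
{
    // find the row corresponding to the current character
    int row = 0;
    for (int i = 0; i < matrixSize; i++)
        if (matrixIndex[i] == current) { row = i; break; }

    // draw a value in [0, totalEdgeWeights[row]) and walk the cumulative weights
    long int target = randomValue % totalEdgeWeights[row];
    long int sum = 0;
    for (int col = 0; col < matrixSize; col++) {
        sum += valueMatrix[row * matrixSize + col];
        if (target < sum) return edgeMatrix[row * matrixSize + col];
    }
    return edgeMatrix[row * matrixSize + matrixSize - 1];  // fallback: last edge
}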
To limit maximum memory usage, each generation operation is partitioned into 50M chunks for allocation. Threads are synchronized and output files are flushed every 50M operations.
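
A minimal host-side sketch of that partitioning, assuming hypothetical LaunchRandomWalkBatch and FlushOutputToFile helpers standing in for the real member functions:

#include <algorithm>
#include <cuda_runtime.h>

void LaunchRandomWalkBatch(unsigned long n);  // assumed: launches the CUDA kernel for n walks
void FlushOutputToFile();                     // assumed: writes the finished chunk to disk

// Partition the total generation count into 50M-item chunks so that device
// memory for output buffers stays bounded regardless of the requested total.
void GenerateChunked(unsigned long total)
{
    const unsigned long kChunk = 50000000UL;            // 50M operations per allocation
    for (unsigned long done = 0; done < total; done += kChunk) {
        unsigned long batch = std::min(kChunk, total - done);
        LaunchRandomWalkBatch(batch);                    // one kernel grid per chunk
        cudaDeviceSynchronize();                         // synchronize every 50M operations
        FlushOutputToFile();                             // flush results before the next chunk
    }
}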
Definition in file cudaModelMatrix.cu.