llama_ros: llama.cpp for ROS 2
llama_ros.langchain.llama_ros.LlamaROS Class Reference
Inheritance diagram for llama_ros.langchain.llama_ros.LlamaROS:
Collaboration diagram for llama_ros.langchain.llama_ros.LlamaROS:

Public Member Functions

int get_num_tokens (self, str text)
 
- Public Member Functions inherited from llama_ros.langchain.llama_ros_common.LlamaROSCommon
Dict validate_environment (cls, Dict values)
 
None cancel (self)
 

Protected Member Functions

Dict[str, Any] _default_params (self)
 
str _llm_type (self)
 
str _call (self, str prompt, Optional[List[str]] stop=None, Optional[CallbackManagerForLLMRun] run_manager=None, **Any kwargs)
 
Iterator[GenerationChunk] _stream (self, str prompt, Optional[List[str]] stop=None, Optional[CallbackManagerForLLMRun] run_manager=None, **Any kwargs)
 
- Protected Member Functions inherited from llama_ros.langchain.llama_ros_common.LlamaROSCommon
GenerateResponse.Result _create_action_goal (self, str prompt, Optional[List[str]] stop=None, Optional[str] image_url=None, Optional[np.ndarray] image=None, Optional[str] tools_grammar=None, **kwargs)
 

Additional Inherited Members

- Static Public Attributes inherited from llama_ros.langchain.llama_ros_common.LlamaROSCommon
LlamaClientNode llama_client = None
 
CvBridge cv_bridge = CvBridge()
 
Metadata model_metadata = None
 
int n_prev = 64
 
int n_probs = 1
 
int min_keep = 0
 
bool ignore_eos = False
 
dict logit_bias = {}
 
float temp = 0.80
 
float dynatemp_range = 0.0
 
float dynatemp_exponent = 1.0
 
int top_k = 40
 
float top_p = 0.95
 
float min_p = 0.05
 
float xtc_probability = 0.0
 
float xtc_threshold = 0.1
 
float typical_p = 1.00
 
int penalty_last_n = 64
 
float penalty_repeat = 1.00
 
float penalty_freq = 0.00
 
float penalty_present = 0.00
 
float dry_multiplier = 0.0
 
float dry_base = 1.75
 
int dry_allowed_length = 2
 
int dry_penalty_last_n = -1
 
list dry_sequence_breakers = ["\\n", ":", '\\"', "*"]
 
int mirostat = 0
 
float mirostat_eta = 0.10
 
float mirostat_tau = 5.0
 
str samplers_sequence = "edkypmxt"
 
str grammar = ""
 
str grammar_schema = ""
 
list penalty_prompt_tokens = []
 
bool use_penalty_prompt_tokens = False
 
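Most of the static attributes above are defaults for llama.cpp's sampler chain (temp, top_k, top_p, min_p, the XTC, DRY, and mirostat parameters, and the samplers_sequence order string). As a rough, self-contained illustration of how three of them prune a token distribution before sampling, here is a sketch of the general top-k / nucleus / min-p idea; it is not llama.cpp's actual implementation, and the function name sample_filter is made up for this example:

```python
import math

def sample_filter(logits, top_k=40, top_p=0.95, min_p=0.05, temp=0.80):
    """Illustrative sketch of a top_k -> top_p -> min_p filtering chain.

    Defaults mirror the attribute values listed above. Returns the indices
    of the tokens that survive filtering, ordered by descending probability.
    """
    # Temperature: scale logits before softmax (temp < 1.0 sharpens).
    weights = [math.exp(l / temp) for l in logits]
    z = sum(weights)
    probs = [w / z for w in weights]
    cand = sorted(range(len(probs)), key=lambda i: -probs[i])

    # top_k: keep only the k most probable tokens.
    cand = cand[:top_k]

    # top_p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for i in cand:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    cand = kept

    # min_p: drop tokens below min_p times the best token's probability.
    floor = min_p * probs[cand[0]]
    return [i for i in cand if probs[i] >= floor]
```

Each stage only shrinks the candidate set, which is why the order of samplers (the samplers_sequence string) matters in the real chain.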

Member Function Documentation

◆ _call()

str llama_ros.langchain.llama_ros.LlamaROS._call(self, str prompt, Optional[List[str]] stop=None, Optional[CallbackManagerForLLMRun] run_manager=None, **Any kwargs)
protected

Implements LangChain's LLM._call hook: runs a single completion for prompt against the llama_ros server and returns the generated text, honoring the optional stop sequences.

◆ _default_params()

Dict[str, Any] llama_ros.langchain.llama_ros.LlamaROS._default_params(self)
protected

Returns the class's default generation parameters as a dictionary (LangChain's standard _default_params hook).

◆ _llm_type()

str llama_ros.langchain.llama_ros.LlamaROS._llm_type(self)
protected

Returns the string identifier LangChain uses to tag this LLM implementation.

◆ _stream()

Iterator[GenerationChunk] llama_ros.langchain.llama_ros.LlamaROS._stream(self, str prompt, Optional[List[str]] stop=None, Optional[CallbackManagerForLLMRun] run_manager=None, **Any kwargs)
protected

Streaming counterpart of _call: yields GenerationChunk objects as partial results arrive, instead of waiting for the full completion.

◆ get_num_tokens()

int llama_ros.langchain.llama_ros.LlamaROS.get_num_tokens(self, str text)

Tokenizes text with the loaded model and returns the number of tokens, overriding LangChain's generic default token count.
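Taken together, these members make LlamaROS usable as a drop-in LangChain LLM. A minimal usage sketch, assuming ROS 2 is sourced and a llama.cpp model has already been launched through a llama_ros bringup launch file (the launch step is deployment-specific, and the import path below assumes the class is re-exported from llama_ros.langchain):

```python
# Sketch only: requires ROS 2 plus a running llama_ros server node.
import rclpy
from llama_ros.langchain import LlamaROS


def main() -> None:
    rclpy.init()

    # Sampling attributes from the listing above can be overridden as fields.
    llm = LlamaROS(temp=0.2, top_k=40, top_p=0.95)

    # get_num_tokens counts tokens with the loaded model's tokenizer.
    print(llm.get_num_tokens("How are ROS 2 topics discovered?"))

    # _call backs the standard LangChain entry point:
    print(llm.invoke("Explain ROS 2 topics in one sentence."))

    # _stream backs token-by-token streaming:
    for chunk in llm.stream("List three ROS 2 distributions."):
        print(chunk, end="", flush=True)

    llm.cancel()  # aborts an in-flight generation, if any
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```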

The documentation for this class was generated from the following file: