Developer API

This is the complete API reference for Pedal and its associated components. If you are an instructor, you might find it more helpful to read over the quickstart too.

Important Concepts

Feedback Function

Any function that can attach a Feedback object to a Report is technically a Feedback Function, and should be clearly marked as such.

A Feedback Response should be Markdown, but should also provide a plain-text, console-friendly version.

It is recommended to have a muted boolean parameter that allows you to use the function strictly as a Condition. When muted, a function still attaches feedback, but that feedback will not contribute to correctness or be considered for display to the user. Its score will still be added, though!

Three perspectives:

  • Grader Developer: We need to be able to create feedback responses that are delivered clearly to the autograder without being cumbersome.

  • Feedback Experimenter: We need to be able to customize these messages in a way that exposes all the features.

  • Researcher: We aren’t trying to analyze Feedback through the source code. We want to be able to generate metadata about any piece of Feedback included in the Report.

Tools should register all their known Feedback labels up front. The goal is to broadcast what the current feedback is. Ideally, we would also have a system for elegantly overriding that feedback’s wording.

Feedback Labels should have a standard naming schema; the other fields should also have some guidance on how they should be authored. In general, we attempt to follow Python variable naming rules (lowercase, underscores).
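As a sketch, the lowercase-with-underscores schema above can be checked with a simple regular expression. Whether Pedal actually enforces this rule anywhere is an assumption; this is just an illustration of the convention.

```python
import re

# Hypothetical checker: labels follow Python variable naming rules,
# lowercase words joined by underscores (e.g., "unused_variable").
LABEL_PATTERN = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def is_valid_label(label: str) -> bool:
    """Return True if the label matches the lowercase_underscore schema."""
    return bool(LABEL_PATTERN.match(label))
```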

An “Atomic” Feedback Function is one that has exactly one possible label outcome. Its metadata should be moved to static function attributes:

  • TEMPLATE_TEXT ((**)=>str): A function that can be used to generate the text string. All of the fields will be passed in as keyword arguments.

  • MESSAGE_TEXT ((**)=>str): A function with the same concept as TEMPLATE_TEXT, but for the message.

  • JUSTIFICATION (str): A static justification

  • TITLE (str): A static student-friendly title

  • VERSION (str): A SemVer string (e.g., ‘0.0.1’); should be paired with a docstring changelog.
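The static-attribute idea can be sketched as a hypothetical atomic Feedback Function. The attribute names come from the list above; the class shape and the example label are assumptions for illustration, not Pedal’s actual implementation.

```python
class unused_variable:
    """Hypothetical atomic Feedback Function with static metadata."""
    TITLE = "Unused Variable"
    JUSTIFICATION = "A variable was assigned but never read."
    VERSION = "0.0.1"

    # All of the fields are passed in as keyword arguments.
    @staticmethod
    def TEMPLATE_TEXT(**fields):
        return "The variable {name} on line {line} is never used.".format(**fields)

text = unused_variable.TEMPLATE_TEXT(name="total", line=4)
```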

A “Composite” Feedback Function has multiple possible label outcomes. Perhaps a LABELS attribute could spell them all out?

Feedback in tools:

  • TIFA: Relatively centralized. Finite set. Desire for configurability, reuse of phrasings.

  • Source: Mostly reporting syntax errors. Finite set.

  • CAIT: No feedback functions, just feedback condition detectors.

  • Assertions: Finite set. Desire for configurability, reuse of phrasings. Heavily procedurally developed.

  • Questions: Finite set, but inherits from others?

  • Sandbox: Runtime errors. Finite set, but also external? Strong desire for configurability.

  • Toolkit: Could be a finite set. Often want to mute these and use them as conditions.

Core Commands

Imperative style commands for constructing feedback in a convenient way. Uses a global report object (MAIN_REPORT).
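The default-to-a-global-report pattern can be sketched in isolation. The Report class here is a minimal stand-in for pedal.core.report.Report, not the real implementation; it only shows how commands can target MAIN_REPORT by default while still accepting an explicit report.

```python
class Report:
    """Minimal stand-in for pedal.core.report.Report."""
    def __init__(self):
        self.feedback = []

# A global singleton, used as the default target for every command.
MAIN_REPORT = Report()

def attach(label, report=MAIN_REPORT):
    """Imperative-style command: attach feedback to the (default) report."""
    report.feedback.append(label)
    return label

attach("unused_variable")             # goes to MAIN_REPORT
other = Report()
attach("syntax_error", report=other)  # goes to an explicitly-given report
```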

class Feedback(*args, label=None, category=None, justification=None, fields=None, field_names=None, kind=None, title=None, message=None, message_template=None, else_message=None, else_message_template=None, priority=None, valence=None, location=None, score=None, correct=None, muted=None, unscored=None, tool=None, version=None, author=None, tags=None, parent=None, report=<pedal.core.report.Report object>, delay_condition=False, activate=True, **kwargs)[source]

A class for storing raw feedback.

label

An internal name for this specific piece of feedback. The label should be an underscore-separated string following the same conventions as names in Python. They do not have to be globally unique, but labels should be as unique as possible (especially within a category).

Type:

str

tool

An internal name for indicating the tool that created this feedback. Should be taken directly from the Tool itself. If None, then this was not created by a tool but directly by the control script.

Type:

str, optional

category

A human-presentable name that can be shown to the learner, indicating what sort of feedback this falls into (e.g., “runtime”, “syntax”, “algorithm”). More than one feedback will be in a category, but a feedback cannot be in more than one category.

Type:

str

kind

The pedagogical role of this feedback, e.g., “misconception”, “mistake”, “hint”, “constraint”. Usually, a piece of Feedback is pointing out a mistake, but feedback can also be used for various other purposes.

Type:

str

justification

An instructor-facing string briefly describing why this feedback was selected. Serves as a “TL;DR” for this feedback category, useful for debugging why a piece of feedback appeared.

Type:

str

justification_template

A markdown-formatted message template that will be used if a justification is None. Any fields will be injected into the template IF the condition is met.

Type:

str

priority

An indication of how important this feedback is relative to other types of feedback in the same category. Might be “high/medium/low”. Exactly how this gets used is up to the resolver, but typically it helps break ties.

Type:

str

valence

Indicates whether this is negative, positive, or neutral feedback. Either 1, -1, or 0.

Type:

int

title

A formal, student-facing title for this feedback. If None, indicates that the label should be used instead.

Type:

str, optional

message

A markdown-formatted message (which also supports HTML) that could be rendered to the user.

Type:

str

message_template

A markdown-formatted message template that will be used if a message is None. Any fields will be injected into the template IF the condition is met.

Type:

str

fields

The raw data that was used to interpolate the template to produce the message.

Type:

Dict[str,Any]
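The relationship between message_template and fields described above can be sketched with ordinary str.format interpolation. Whether Pedal uses str.format or another templating scheme is an assumption here; only the fallback logic (message wins when present, otherwise the template is filled from fields) comes from the documentation.

```python
def render_message(message, message_template, fields):
    """Use the message if given; otherwise interpolate fields into the template."""
    if message is not None:
        return message
    return message_template.format(**fields)

rendered = render_message(
    message=None,
    message_template="Expected {expected} but got {actual}.",
    fields={"expected": 5, "actual": 3},
)
```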

location

Information about specific locations relevant to this message.

Type:

Location or int

score

A numeric score to modify the student’s total score, indicating their overall performance. It is ultimately up to the Resolver to decide how to combine all the different scores; a typical strategy would be to add all the scores together for any non-muted feedback.

Type:

int
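The “typical strategy” mentioned above (summing scores, while respecting the unscored flag documented below) might look like the following sketch. Real resolvers are free to combine scores differently, and the dictionary shape here is an assumption for illustration.

```python
def total_score(feedbacks):
    """Sum the scores of triggered feedback, skipping unscored entries."""
    return sum(
        fb.get("score", 0)
        for fb in feedbacks
        if not fb.get("unscored", False)
    )

score = total_score([
    {"label": "unit_test_passed", "score": 2},
    {"label": "style_note", "score": 1, "unscored": True},
    {"label": "bonus", "score": 3},
])
```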

correct

Indicates that the entire submission should be considered correct (success) and that the task is now finished.

Type:

bool

muted

Whether this piece of feedback is something that should be shown to a student. There are various use cases for muted feedback: they can serve as flags for later conditionals, suppressed default kinds of feedback, or perhaps feedback that is interesting for analysis but not pedagogically helpful to give to the student. They will still contribute to overall score, but not to the correctness of the submission.

Type:

bool

unscored

Whether this piece of feedback contributes to the score/correctness.

Type:

bool

else_message

A string to render as a message when a NEGATIVE valence feedback is NOT triggered, or a POSITIVE valence feedback IS triggered.

Type:

str

else_message_template

Similar to the message_template, but for the else_message.

Type:

str

activate

Used for default feedback objects without a custom condition, to indicate whether they should be considered triggered. Defaults to True; setting this to False means that the feedback object will be deactivated. Note that most inheriting Feedback Functions will not respect this parameter.

Type:

bool

author

A list of names/emails that indicate who created this piece of feedback. They can be either names, emails, or combinations in the style of "Cory Bart <acbart@udel.edu>".

Type:

List[str]

version

A version string in the style of Semantic Versioning (SemVer), such as "0.0.1". The last (third) digit should be incremented for small bug fixes/changes. The middle (second) digit should be used for more substantial changes. The first digit should be incremented when changes are made based on exposure to learners or some other evidence-based motivation.

Type:

str

tags

Any tags that you want to attach to this feedback.

Type:

list[Tag]

parent

Information about what logical grouping within the submission this belongs to. Various tools can chunk up a submission (e.g., by section); they can use this field to keep track of how that decision was made. Resolvers can also use this information to organize feedback or to report multiple categories.

Type:

int, str, or pedal.core.feedback.Feedback

report

The Report object to attach this feedback to. Defaults to MAIN_REPORT. Unspecified fields will be filled in by inspecting the current Feedback Function context.

Type:

Report

CATEGORIES

alias of FeedbackCategory

KINDS

alias of FeedbackKind

condition(*args, **kwargs)[source]

Detect if this feedback is present in the code. Defaults to true through the activate parameter.

Returns:

Whether this feedback’s condition was detected.

Return type:

bool

update_location(location)[source]

Updates both the fields and location attribute. TODO: Handle less information intelligently.

clear_report(report=<pedal.core.report.Report object>)[source]

Removes all existing data from the report, including any submissions, suppressions, feedback, and Tool data.

Parameters:

report – The report to clear (defaults to the pedal.core.report.MAIN_REPORT).

class compliment(message=None, title=None, message_template=None, **kwargs)[source]

Create a positive feedback for the user, potentially on a specific line of code.

contextualize_report(submission, filename='answer.py', clear=True, report=<pedal.core.report.Report object>)[source]

Updates the report with the submission. By default, clears out any old information in the report. You can pass in either an actual Submission or a string representing the code of the submission.

Parameters:
  • submission (str or Submission) –

  • filename (str or None) – If the submission was not a Submission, then this will be used as the filename for the code given in submission.

  • clear (bool) – Whether or not to clear the report before attaching the submission.

  • report – The report to attach this feedback to (defaults to the MAIN_REPORT).

debug(*items, **kwargs)[source]

Attach logging information to the Report as a piece of feedback. Works at a higher priority than log() and does not attempt to convert to strings.

TODO: Consider updating to match log

Parameters:

items (Any) – Any set of values to log information about. Unlike log(), these will not be converted to strings.

class explain(message=None, message_template=None, **kwargs)[source]

Give a high-priority piece of negative feedback to the student.

feedback

Lowercase “function” version that works like other Core Feedback Functions.

class gently(message=None, message_template=None, **kwargs)[source]

Give a low-priority piece of negative feedback to the student.

Parameters:

message (str) – The feedback message to show to the student.

get_all_feedback(report=<pedal.core.report.Report object>)[source]

Gives access to the list of feedback from the report. Usually, you won’t need this; but if you want to build on the results of earlier tools, it can be a useful mechanism.

TODO: Provide mechanisms for conveniently searching feedback

Parameters:

report (Report) – The report to attach this feedback to (defaults to the MAIN_REPORT).

Returns:

A list of feedback objects from the report.

Return type:

List[Feedback]

get_submission(report=<pedal.core.report.Report object>) → Submission[source]

Get the current submission from the given report, or the default MAIN_REPORT.

Parameters:

report – The report to attach this feedback to (defaults to the MAIN_REPORT).

Returns:

The current submission

Return type:

Submission

class give_partial(value, **kwargs)[source]

Increases the user’s current score by the given value.

class guidance(message=None, message_template=None, **kwargs)[source]

Give instructions about a question.

hide_correctness(report=<pedal.core.report.Report object>)[source]

Force the report to not indicate score/correctness.

Parameters:

report (pedal.core.report.Report) – The report object to hide correctness on.

log(*items, sep=' ', **kwargs)[source]

Attach logging information to the Report as a piece of feedback.

Parameters:
  • sep – The separator to use between items (defaults to space).

  • items (Any) – Any set of values to log information about. Will be converted to strings using str if not already strings.

class set_correct(*args, label=None, category=None, justification=None, fields=None, field_names=None, kind=None, title=None, message=None, message_template=None, else_message=None, else_message_template=None, priority=None, valence=None, location=None, score=None, correct=None, muted=None, unscored=None, tool=None, version=None, author=None, tags=None, parent=None, report=<pedal.core.report.Report object>, delay_condition=False, activate=True, **kwargs)[source]

(Feedback Function)

Creates Successful feedback for the user, indicating that the entire assignment is done.

set_formatter(formatter, report=<pedal.core.report.Report object>)[source]

Set the formatter for the given report.

Parameters:
  • formatter (Formatter) – The formatter class to use. If you wish to use an instance instead, you’ll need to call set_formatter on the report instance instead.

  • report (Report) – The report to attach this feedback to (defaults to the MAIN_REPORT).

set_success

alias of set_correct

suppress(category=None, label=True, fields=None, report=<pedal.core.report.Report object>)[source]

Hides a given category or label within a category from being considered by the resolver.

Parameters:
  • category (str) – The general feedback category to suppress within. Should be a member of pedal.core.feedback_category.FeedbackCategory.

  • label (str or bool) – The specific feedback label to suppress, or True if all the labels within this category should be suppressed.

  • fields (dict) – The fields that will be exactly matched to suppress a given feedback. The keys should be strings.

  • report (Report) – The report object to suppress information within.
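The matching rules described for suppress (a whole category when label is True, a specific label otherwise, and optional exact field matching) can be sketched as a predicate. This is a hypothetical reconstruction of the documented behavior, not Pedal’s actual suppression code.

```python
def is_suppressed(feedback, category=None, label=True, fields=None):
    """Hypothetical predicate mirroring the suppression rules above."""
    if category is not None and feedback["category"] != category:
        return False
    # label=True suppresses every label within the category.
    if label is not True and feedback["label"] != label:
        return False
    # Fields, when given, must match exactly.
    if fields is not None:
        if any(feedback.get("fields", {}).get(k) != v for k, v in fields.items()):
            return False
    return True

fb = {"category": "runtime", "label": "name_error", "fields": {"name": "x"}}
```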

class system_error(*args, label=None, category=None, justification=None, fields=None, field_names=None, kind=None, title=None, message=None, message_template=None, else_message=None, else_message_template=None, priority=None, valence=None, location=None, score=None, correct=None, muted=None, unscored=None, tool=None, version=None, author=None, tags=None, parent=None, report=<pedal.core.report.Report object>, delay_condition=False, activate=True, **kwargs)[source]

Call this function to indicate that something has gone wrong at the system level with Pedal. Ideally, this doesn’t happen, but sometimes errors cascade and it’s polite for tools to suggest that they are not working correctly. These will not usually be reported to the student.

Report

File that holds the Report class and the global MAIN_REPORT.

Note that you can make other Reports, but that doesn’t actually seem to be useful very often. Usually you want to just rely on the global MAIN_REPORT.

MAIN_REPORT = <pedal.core.report.Report object>

The global Report object. Meant to be used as a default singleton for any tool, so that instructors do not have to create their own Report. Of course, all APIs are expected to work with a given Report, and only default to this Report when no others are given. Ideally, the average instructor will never know this exists.

class Report[source]

A class for storing Feedback generated by Tools, along with any auxiliary data that the Tool might want to provide for other tools.

submission

The contextualized submission information.

Type:

Submission

feedback

The raw feedback generated for this Report so far.

Type:

list[Feedback]

suppressions

The categories and labels that have been suppressed so far.

Type:

list[tuple[str, str]]

hiddens

The parts of the final response that should be hidden. This can globally hide the ‘correct’, ‘score’, etc.

Type:

set[str]

group

The label for the current group. Feedback given by a Tool will automatically receive the current group. This is used by the Source tool, for example, to group feedback by sections, and by the pedal.assertions.commands.unit_test() function to combine results.

Type:

int or str

group_names

A printable, student-facing name for the group. When a group needs to be rendered out to the user, this will override whatever label would otherwise have been presented.

Type:

dict[group, str]

hooks

A dictionary mapping events to a list of callable functions. Tools can register functions on hooks to have them executed when the event is triggered by another tool. For example, the Assertions tool has hooks on the Source tool to trigger assertion resolutions before advancing to next sections.

Type:

dict[str, list[callable]]

_tool_data

Maps tool names to their data. A tool can use its namespace to store whatever it wants, though that will probably be a dictionary itself.

Type:

dict[str, Any]

resolves

The result of having previously called a resolver. This allows you to check if a report has previously been resolved, or do something with that data.

Type:

list[Any]

result

The FinalFeedback (distinct from a Feedback) that was generated as a result of resolving this Report, or None if the Report is not yet resolved.

Type:

FinalFeedback

TOOLS = {'assertions': <pedal.core.tool.ToolRegistration object>, 'cait': <pedal.core.tool.ToolRegistration object>, 'sandbox': <pedal.core.tool.ToolRegistration object>, 'source': <pedal.core.tool.ToolRegistration object>, 'tifa': <pedal.core.tool.ToolRegistration object>}

The tools registered for this report, available via their names.

Type:

dict[str, dict]

classmethod add_class_hook(event, function)[source]

Similar to add_hook, except attaches them to the class, so they will be executed for ALL report subclasses.

add_feedback(feedback)[source]

Attaches the given feedback object to this report.

Parameters:

feedback (Feedback) – The feedback object to attach.

Returns:

The attached feedback.

Return type:

Feedback

add_hook(event, function)[source]

Register the function to be executed when the given event is triggered.

Parameters:
  • event (str) – An event name. Multiple functions can be triggered for the same event. The format is as follows: "namespace.function.extra" The ".extra" component is optional to add further nuance, but the general idea is that you are referring to functions that, when called, should trigger other functions to be called first. The namespace is typically a tool or module.

  • function (callable) – A callable function. This function should accept a keyword parameter named report; this report will be passed as that argument.
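A minimal sketch of the hook registry described here: event names mapped to lists of callables, each invoked with the report as a keyword argument. The class below is a toy stand-in for illustration, and the event name is hypothetical.

```python
class HookedReport:
    """Hypothetical mini-report supporting add_hook/execute_hooks."""
    def __init__(self):
        self.hooks = {}

    def add_hook(self, event, function):
        self.hooks.setdefault(event, []).append(function)

    def execute_hooks(self, tool, event_name):
        # Events are namespaced by tool, e.g. "source.next_section".
        for function in self.hooks.get(tool + "." + event_name, []):
            function(report=self)

report = HookedReport()
calls = []
report.add_hook("source.next_section", lambda report: calls.append("resolved"))
report.execute_hooks("source", "next_section")
```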

add_ignored_feedback(feedback)[source]

Attaches the given feedback object to this report, but only in the ignored list. That means it should not be considered by the Resolver, since its condition did not apply to the code. Some Resolvers like to know about feedback that was not reached.

Parameters:

feedback (Feedback) – The feedback object to attach.

Returns:

The attached feedback.

Return type:

Feedback

clear()[source]

Resets the entire report back to its starting form, including deleting any attached submissions, tool data, and feedbacks. It will also reset any overridden fields of feedback classes. However, it will not affect class hooks.

contextualize(submission)[source]

Attach the given submission to this report.

Parameters:

submission (pedal.core.submission.Submission) – The submission to attach to this report.

execute_hooks(tool, event_name, arguments=None, keyword_arguments=None)[source]

Trigger the functions for all of the associated hooks. Hooks will be called with this report as a keyword report argument.

Parameters:
  • tool (str) – The name of the tool, to namespace events by.

  • event_name (str) – The event name (separate words with periods).

  • arguments (tuple[any]) – The arguments to be passed to the callback function.

  • keyword_arguments (dict[str, any]) – The keyword arguments to be passed to the callback function.

full_clear()[source]

This totally resets the report, including any class hooks.

hide_correctness()[source]

Suppress the RESULT category entirely, so that the report doesn’t indicate whether or not the submission was correct. TODO: Make this just a regular command.

classmethod register_tool(tool_name: str, reset_function)[source]

Identifies that the given Tool should be made available.

Parameters:
  • tool_name (str) – A unique string identifying this tool.

  • reset_function – The function to call to reset the Tool.

set_formatter(formatter)[source]

Update the formatter with the new option.

Parameters:

formatter (pedal.core.formatting.Formatter) – The new formatter to use.

stop_group(group)[source]

TODO: Should this prematurely end other groups? If so, do they get a callback event to do any wrap-up?

suppress(category=None, label=True, fields=None)[source]

Suggest that an entire category, or a label within a category, be ignored by the resolver. TODO: Currently, only global suppression is supported.

Parameters:
  • category (str) – The category of feedback to suppress.

  • label (bool or str) – A specific label to match against and suppress.

  • fields (dict of key/values) – The fields that will be matched exactly to suppress.

Location

Simple data class for storing information about a location within source code.

class Location(line, col=None, end_line=None, end_col=None, filename=None)[source]

A class for storing information about a location in source code.

line

A line of source code.

Type:

int

col

A column within a line of source code. If missing, then defaults to the entire line.

Type:

int, optional

end_line

The ending line of the source code region. Requires line.

Type:

int, optional

end_col

The ending column of the source code region. Requires col.

Type:

int, optional

filename

The filename that this location refers to. If missing, then defaults to the student’s submission’s main file.

Type:

str, optional

classmethod from_ast(node)[source]

Creates a new Location object from the AST node. Should work for both built-in AST nodes and CaitNodes.

Parameters:

node (Node) –

Returns:

Location
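For built-in AST nodes, the information from_ast needs is already available on standard ast attributes. The sketch below shows where such data comes from using only the standard library; it is not Pedal’s implementation, and does not cover CaitNodes.

```python
import ast

def location_from_ast(node):
    """Extract line/column information from a built-in AST node."""
    return {
        "line": node.lineno,
        "col": node.col_offset,
        "end_line": node.end_lineno,
        "end_col": node.end_col_offset,
    }

tree = ast.parse("total = 1 + 2")
assignment = tree.body[0]      # the Assign node for "total = 1 + 2"
loc = location_from_ast(assignment)
```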

to_json()[source]

Creates a JSON version of this object, with all the fields.

Returns:

The JSON version of this location information.

Return type:

Dict[str,Any]

Feedback

Simple data classes for storing feedback to present to learners.

CompositeFeedbackFunction(*functions)[source]

Decorator for functions that return multiple types of feedback functions.

Parameters:

functions (callable) – A list of callable functions.

Returns:

The decorated function.

Return type:

callable

class Feedback(*args, label=None, category=None, justification=None, fields=None, field_names=None, kind=None, title=None, message=None, message_template=None, else_message=None, else_message_template=None, priority=None, valence=None, location=None, score=None, correct=None, muted=None, unscored=None, tool=None, version=None, author=None, tags=None, parent=None, report=<pedal.core.report.Report object>, delay_condition=False, activate=True, **kwargs)[source]

A class for storing raw feedback.

label

An internal name for this specific piece of feedback. The label should be an underscore-separated string following the same conventions as names in Python. They do not have to be globally unique, but labels should be as unique as possible (especially within a category).

Type:

str

tool

An internal name for indicating the tool that created this feedback. Should be taken directly from the Tool itself. If None, then this was not created by a tool but directly by the control script.

Type:

str, optional

category

A human-presentable name that can be shown to the learner, indicating what sort of feedback this falls into (e.g., “runtime”, “syntax”, “algorithm”). More than one feedback will be in a category, but a feedback cannot be in more than one category.

Type:

str

kind

The pedagogical role of this feedback, e.g., “misconception”, “mistake”, “hint”, “constraint”. Usually, a piece of Feedback is pointing out a mistake, but feedback can also be used for various other purposes.

Type:

str

justification

An instructor-facing string briefly describing why this feedback was selected. Serves as a “TL;DR” for this feedback category, useful for debugging why a piece of feedback appeared.

Type:

str

justification_template

A markdown-formatted message template that will be used if a justification is None. Any fields will be injected into the template IF the condition is met.

Type:

str

priority

An indication of how important this feedback is relative to other types of feedback in the same category. Might be “high/medium/low”. Exactly how this gets used is up to the resolver, but typically it helps break ties.

Type:

str

valence

Indicates whether this is negative, positive, or neutral feedback. Either 1, -1, or 0.

Type:

int

title

A formal, student-facing title for this feedback. If None, indicates that the label should be used instead.

Type:

str, optional

message

A markdown-formatted message (which also supports HTML) that could be rendered to the user.

Type:

str

message_template

A markdown-formatted message template that will be used if a message is None. Any fields will be injected into the template IF the condition is met.

Type:

str

fields

The raw data that was used to interpolate the template to produce the message.

Type:

Dict[str,Any]

location

Information about specific locations relevant to this message.

Type:

Location or int

score

A numeric score to modify the student’s total score, indicating their overall performance. It is ultimately up to the Resolver to decide how to combine all the different scores; a typical strategy would be to add all the scores together for any non-muted feedback.

Type:

int

correct

Indicates that the entire submission should be considered correct (success) and that the task is now finished.

Type:

bool

muted

Whether this piece of feedback is something that should be shown to a student. There are various use cases for muted feedback: they can serve as flags for later conditionals, suppressed default kinds of feedback, or perhaps feedback that is interesting for analysis but not pedagogically helpful to give to the student. They will still contribute to overall score, but not to the correctness of the submission.

Type:

bool

unscored

Whether this piece of feedback contributes to the score/correctness.

Type:

bool

else_message

A string to render as a message when a NEGATIVE valence feedback is NOT triggered, or a POSITIVE valence feedback IS triggered.

Type:

str

else_message_template

Similar to the message_template, but for the else_message.

Type:

str

activate

Used for default feedback objects without a custom condition, to indicate whether they should be considered triggered. Defaults to True; setting this to False means that the feedback object will be deactivated. Note that most inheriting Feedback Functions will not respect this parameter.

Type:

bool

author

A list of names/emails that indicate who created this piece of feedback. They can be either names, emails, or combinations in the style of "Cory Bart <acbart@udel.edu>".

Type:

List[str]

version

A version string in the style of Semantic Versioning (SemVer), such as "0.0.1". The last (third) digit should be incremented for small bug fixes/changes. The middle (second) digit should be used for more substantial changes. The first digit should be incremented when changes are made based on exposure to learners or some other evidence-based motivation.

Type:

str

tags

Any tags that you want to attach to this feedback.

Type:

list[Tag]

parent

Information about what logical grouping within the submission this belongs to. Various tools can chunk up a submission (e.g., by section); they can use this field to keep track of how that decision was made. Resolvers can also use this information to organize feedback or to report multiple categories.

Type:

int, str, or pedal.core.feedback.Feedback

report

The Report object to attach this feedback to. Defaults to MAIN_REPORT. Unspecified fields will be filled in by inspecting the current Feedback Function context.

Type:

Report

CATEGORIES

alias of FeedbackCategory

KINDS

alias of FeedbackKind

condition(*args, **kwargs)[source]

Detect if this feedback is present in the code. Defaults to true through the activate parameter.

Returns:

Whether this feedback’s condition was detected.

Return type:

bool

update_location(location)[source]

Updates both the fields and location attribute. TODO: Handle less information intelligently.

class FeedbackCategory[source]

An Enumeration of feedback condition categories. These categories represent distinct types of feedback conditions based on their presence within the students’ submission. Notice that these explain the condition, not the feedback response (which would fall under Kind).

Some are contextualized to instruction (“mistakes”, “specification”, “instructor”) and some are generic (“syntax”, “runtime”, “algorithmic”). One category is also available for errors identified by the student.

ALGORITHMIC = 'algorithmic'

Errors that do not prevent functioning code but are generically wrong.

COMPLETE = 'complete'

A special category recognizing a completed submission.

INSTRUCTIONS = 'instructions'

A category for feedback that is not actually an error, but is neutral.

INSTRUCTOR = 'instructor'

Errors marked by the instructor in a one-off fashion.

MISTAKES = 'mistakes'

Errors that do not prevent functioning code but are specifically wrong.

POSITIVE = 'positive'

A category for feedback that is not actually an error, but is positive information.

RUNTIME = 'runtime'

Execution errors triggered during runtime by an invalid Python operation.

SPECIFICATION = 'specification'

Errors suggested because the code failed to meet specified behavior.

STUDENT = 'student'

Errors marked by the students’ own code, such as failing test cases.

STYLE = 'style'

Stylistic errors that do not prevent correct behavior but are otherwise undesirable.

SYNTAX = 'syntax'

Grammatical and typographical errors that prevent parsing.

SYSTEM = 'system'

Errors caused by the Pedal grading infrastructure or the surrounding infrastructure.

UNKNOWN = 'uncategorized'

A category for unknown feedback. Ideally, never used.

class FeedbackKind[source]

An enumeration of the possible kinds of feedback responses, based on their pedagogical role. Valence can vary between specific instances of a kind of feedback, but some tend to have a specific valence.

MISCONCEPTION

A description of the misconception that is believed to be in the student’s mind, or perhaps the relevant concept from the material that should be associated with this. (“Variables must be initialized before they are used”).

Type:

str

MISTAKE

A description of the error or bug that the student has created (“NameError on line 5: sum has not been defined”).

Type:

str

HINT

A suggestion for what the student can do (“Initialize the sum variable on line 2”).

Type:

str

CONSTRAINT

A description of the task requirements or task type that the student has violated (“You used a for loop, but this question expected you to use recursion.”).

Type:

str

METACOGNITIVE

A suggestion for more regulative strategies (“You have been working for 5 hours, perhaps it is time to take a break?”).

Type:

str

class FeedbackResponse(*args, label=None, category=None, justification=None, fields=None, field_names=None, kind=None, title=None, message=None, message_template=None, else_message=None, else_message_template=None, priority=None, valence=None, location=None, score=None, correct=None, muted=None, unscored=None, tool=None, version=None, author=None, tags=None, parent=None, report=<pedal.core.report.Report object>, delay_condition=False, activate=True, **kwargs)[source]

An extension of Feedback meant to indicate that the class does not have any condition behind its response.

Categories and Kinds are special enumerations that classify the feedback conditions and responses, respectively.

class FeedbackCategory[source]

An Enumeration of feedback condition categories. These categories represent distinct types of feedback conditions based on their presence within the students’ submission. Notice that these explain the condition, not the feedback response (which would fall under Kind).

Some are contextualized to instruction (“mistakes”, “specification”, “instructor”) and some are generic (“syntax”, “runtime”, “algorithmic”). One category is also available for errors identified by the student.

ALGORITHMIC = 'algorithmic'

Errors that do not prevent functioning code but are generically wrong.

COMPLETE = 'complete'

A special category recognizing a completed submission.

INSTRUCTIONS = 'instructions'

A category for feedback that is not actually an error, but is neutral.

INSTRUCTOR = 'instructor'

Errors marked by the instructor in a one-off fashion.

MISTAKES = 'mistakes'

Errors that do not prevent functioning code but are specifically wrong.

POSITIVE = 'positive'

A category for feedback that is not actually an error, but is positive information.

RUNTIME = 'runtime'

Execution errors triggered during runtime by an invalid Python operation.

SPECIFICATION = 'specification'

Errors suggested because the code failed to meet specified behavior.

STUDENT = 'student'

Errors marked by the students’ own code, such as failing test cases.

STYLE = 'style'

Stylistic errors that do not prevent correct behavior but are otherwise undesirable.

SYNTAX = 'syntax'

Grammatical and typographical errors that prevent parsing.

SYSTEM = 'system'

Errors caused by the Pedal grading infrastructure or the surrounding infrastructure.

UNKNOWN = 'uncategorized'

A category for unknown feedback. Ideally, never used.

class FeedbackKind[source]

An enumeration of the possible kinds of feedback responses, based on their pedagogical role. Valence can vary between specific instances of a kind of feedback, but some tend to have a specific valence.

MISCONCEPTION

A description of the misconception that is believed to be in the student’s mind, or perhaps the relevant concept from the material that should be associated with this. (“Variables must be initialized before they are used”).

Type:

str

MISTAKE

A description of the error or bug that the student has created (“NameError on line 5: sum has not been defined”).

Type:

str

HINT

A suggestion for what the student can do (“Initialize the sum variable on line 2”).

Type:

str

CONSTRAINT

A description of the task requirements or task type that the student has violated (“You used a for loop, but this question expected you to use recursion.”).

Type:

str

METACOGNITIVE

A suggestion for more regulative strategies (“You have been working for 5 hours, perhaps it is time to take a break?”).

Type:

str

class FeedbackStatus[source]

Enumeration of feedback status outcomes. When you create a piece of feedback, it will be either active or inactive depending on whether its condition was met. Alternatively, it is possible that checking the condition triggered an exception. It may also be delayed, indicating that its condition has not yet been checked.
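These outcomes could be sketched as a simple enumeration; the member names below are assumptions for illustration, not necessarily the real attributes of the class:

```python
from enum import Enum

# Hypothetical sketch of the status outcomes described above; the real
# enumeration lives in Pedal's core, and these member names are assumed.
class FeedbackStatus(Enum):
    INACTIVE = 'inactive'  # condition was checked and not met
    ACTIVE = 'active'      # condition was checked and met
    ERROR = 'error'        # checking the condition raised an exception
    DELAYED = 'delayed'    # condition has not been checked yet
```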

Environment

Environments are a collection of defaults, setups, and overrides that make Pedal adapt better to a given autograding platform (e.g., BlockPy, WebCAT, GradeScope). They are meant to streamline common configuration.

class Environment(files=None, main_file='answer.py', main_code=None, user=None, assignment=None, course=None, execution=None, instructor_file='instructor.py', report=<pedal.core.report.Report object>)[source]

Abstract Environment class, meant to be subclassed by the environment to help simplify configuration. Technically doesn’t need to do anything. Creating an instance of an environment will automatically clear out the existing contents of the report.

Parameters:
  • main_file (str) – The filename of the main file.

  • main_code (str) – The actual code of the main file.

  • files (dict[str, str]) – A dictionary of filenames mapped to their contents.
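As a rough illustration of this lifecycle, the following stand-ins (not the real Pedal classes) sketch how creating an environment clears the existing report and derives main_code from the main file's contents:

```python
# Hypothetical stand-ins sketching the Environment behavior described
# above; the real classes are pedal.core.report.Report and
# pedal.environments.Environment.
class Report:
    def __init__(self):
        self.feedback = []
        self.submission = None

    def clear(self):
        # Creating an Environment clears out the existing report contents.
        self.feedback.clear()
        self.submission = None

MAIN_REPORT = Report()

class Environment:
    def __init__(self, files=None, main_file='answer.py', main_code=None,
                 report=MAIN_REPORT):
        report.clear()
        self.files = files or {}
        self.main_file = main_file
        # main_code defaults to the contents of the main file.
        self.main_code = (main_code if main_code is not None
                          else self.files.get(main_file))
        self.report = report

env = Environment(files={'answer.py': 'a = 0'})
```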

Submission

Representation of a student’s submission to pedal. Almost certainly contains their code, but may also contain other metadata.

TODO: Normalize the concept of evaluations ("<stdin>" or "evaluations"): get_program(filename='<stdin>') => submission.files['<stdin>'], and get_evaluation().

class Submission(files=None, main_file='answer.py', main_code=None, user=None, assignment=None, course=None, execution=None, instructor_file='instructor_tests.py', load_error=None)[source]

Simple class for holding information about the student’s submission.

Examples

A very simple example of creating a Submission with a single file:

>>> Submission({'answer.py': "a = 0"})
files

Dictionary of filenames mapped to their contents, emulating a file system.

Type:

dict mapping str to str

main_file

The entry point file that will be considered the main file.

Type:

str

main_code

The actual code to run; if None, then defaults to the code of the main file. Useful for tools that want to change the currently active code (e.g., Source’s sections) or run additional commands (e.g., Sandboxes’ call).

Type:

str

user

Additional information about the user.

Type:

dict

assignment

Additional information about the assignment.

Type:

dict

course

Additional information about the course.

Type:

dict

execution

Additional information about the results of executing the students’ code.

Type:

dict

get_files_lines()[source]

Retrieves a dictionary of lists of strings representing the files’ lines of code.

get_lines(filename=None)[source]

Retrieves the lines of code from this submission.

Returns:

The lines of code for this submission.

Return type:

list[str]
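The line-retrieval methods can be sketched with a cut-down stand-in class (a hypothetical simplification of the real Submission):

```python
# Hypothetical simplification of pedal.core.submission.Submission,
# sketching how get_lines and get_files_lines behave.
class Submission:
    def __init__(self, files, main_file='answer.py'):
        self.files = files
        self.main_file = main_file

    def get_files_lines(self):
        # Dictionary of each file's contents split into lines.
        return {name: code.split('\n') for name, code in self.files.items()}

    def get_lines(self, filename=None):
        # Defaults to the main file when no filename is given.
        return self.files[filename or self.main_file].split('\n')

sub = Submission({'answer.py': 'a = 0\nprint(a)'})
```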

replace_main(code: str, file: str | None = None)[source]

Substitutes the current main code and filename with the given arguments.

Parameters:
  • code (str) – The new code to substitute in.

  • file (str) – An optional filename to use.

set_line_offset(lineno, filename=None)[source]

Sets the line offset for the given filename. Defaults to main file.

Tools

Tools are effectively submodules within Pedal; the notable exceptions are pedal.core, pedal.environments, pedal.utilities, and pedal.resolvers.

All Tools with any kind of state are expected to have a reset function. Although reset can take parameters, we recommend avoiding them: reset should put things back into a “null” state, with follow-up functions providing any initial state.

Tools should avoid removing their references (i.e., clear out data from existing lists and dictionaries instead of assigning new ones).

Tools should define a constants.py file with any useful constants. One of these should be TOOL_NAME, a string value indicating their desired namespace (e.g., TOOL_NAME = "source").

Tools should define a feedbacks.py module that centralizes Feedback Functions for that tool.

Any function that interacts with Reports should expose a report parameter that defaults to pedal.core.report.MAIN_REPORT. All internal functions should respect the report that was passed in, and not assume the MAIN_REPORT.
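This convention can be sketched as follows, with a plain dictionary standing in for the real Report object (the function and label names are hypothetical):

```python
# Hypothetical sketch of the report-parameter convention; a dictionary
# stands in for pedal.core.report.MAIN_REPORT.
MAIN_REPORT = {'feedback': []}

def my_feedback_function(condition, report=MAIN_REPORT):
    # Respect whichever report was passed in; never assume MAIN_REPORT.
    if condition:
        report['feedback'].append('my_tool.my_feedback_label')
    return condition

# A caller can redirect feedback to a local report instead of the default.
local_report = {'feedback': []}
my_feedback_function(True, report=local_report)
```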

Tools are allowed to store state within their namespace of a pedal.core.report.Report. If the tool has not yet been initialized, its reset function will be called. You can update and access fields via dictionary access:

report[TOOL_NAME]['my field'] = False
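A minimal sketch of this state pattern, again with a plain dictionary standing in for the Report, and a reset function following the mutate-rather-than-reassign guidance above:

```python
# Hypothetical sketch of namespaced tool state and the reset convention;
# a plain dictionary stands in for the Report object.
TOOL_NAME = "example_tool"

def reset(report):
    # Put the tool back into a "null" state; mutate the existing
    # dictionary rather than assigning a new one.
    report.setdefault(TOOL_NAME, {}).clear()
    report[TOOL_NAME]['my field'] = False

report = {}
reset(report)
report[TOOL_NAME]['my field'] = True
```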

The __init__.py file for a Tool should use __all__ to expose any interesting teacher-level functions, classes, and data. That way, teachers can consistently use from pedal.tool import * to gain access to that set of members.

class ToolRegistration(name, reset)[source]

ToolRegistration is a data class for holding general, system-wide information about a Tool. Note that it doesn’t hold the tool’s data, just the static information about the tool like its reset function.

name

The formal name of this tool.

Type:

str

reset

A function defined for initializing and re-initializing the tool’s data.

Type:

callable
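This registration can be sketched as a small data class (a hypothetical simplification; the real class lives in Pedal's core):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical simplification of ToolRegistration: static, system-wide
# information about a Tool, not the tool's own data.
@dataclass
class ToolRegistration:
    name: str        # the formal name of this tool
    reset: Callable  # initializes and re-initializes the tool's data

def reset_example_tool():
    pass

registration = ToolRegistration('example_tool', reset_example_tool)
```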