Extremely Serious

Month: February 2025

Understanding Gradle Task Lifecycle and Execution Phases

Gradle is a powerful build automation tool for JVM-based projects written in languages such as Java, Kotlin, and Groovy. Understanding its task lifecycle is essential for writing efficient and well-structured build scripts. This article explains the Gradle task lifecycle, its execution phases, plugin tasks, and the role of doLast {} in defining main actions.


Gradle Task Lifecycle

A Gradle build goes through three main phases:

1. Initialization Phase

  • Identifies the projects involved in the build.
  • Creates Project objects but does not execute tasks.

2. Configuration Phase

  • Evaluates build.gradle or build.gradle.kts scripts.
  • Defines tasks and their dependencies, but does not execute them.

3. Execution Phase

  • Determines which tasks need to run based on dependencies.
  • Executes each task in the correct order.
  • Each task runs its actions in this order:
    1. doFirst {} (pre-actions, run before the main task logic)
    2. Main task action (the built-in behavior of a typed task; for an untyped task, the logic you place in doLast {})
    3. doLast {} (post-actions, run after the main task logic, in the order they were added)

Execution Phase Example

Let's define some simple tasks and observe their execution order:

// Define taskA
task taskA {
    doFirst { println "Before taskA" }
    doLast { println "Main action of taskA" }
    doLast { println "After taskA" }
}

// Define taskB
task taskB {
    doFirst { println "Before taskB" }
    doLast { println "Main action of taskB" }
    doLast { println "After taskB" }
}

// Define taskC, which depends on taskA and taskB
task taskC {
    dependsOn taskA, taskB
    doLast { println "Main action of taskC" }
}

Expected Output when Running "gradle taskC"

> Task :taskA
Before taskA
Main action of taskA
After taskA

> Task :taskB
Before taskB
Main action of taskB
After taskB

> Task :taskC
Main action of taskC

Since taskC depends on taskA and taskB, Gradle ensures that taskA and taskB execute before taskC.
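
Note that dependsOn only guarantees that taskA and taskB finish before taskC starts; it does not fix the order of taskA relative to taskB. If that order matters, an explicit ordering rule such as mustRunAfter can be declared. A minimal sketch, added here purely for illustration:

// Optional: run taskB after taskA whenever both are scheduled
taskB.mustRunAfter taskA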


Common Main Task Actions

Gradle tasks can perform various actions, such as:

Compiling code (compileJava)

task compileCode {
    doLast { println "Compiling source code..." }
}

Copying files (Copy task)

task copyFiles(type: Copy) {
    from 'src/resources'
    into 'build/resources'
}

Running tests (test task)

task runTests {
    doLast { println "Running unit tests..." }
}

Creating a JAR file (Jar task)

task createJar(type: Jar) {
    archiveBaseName.set("myApp")
    destinationDirectory.set(file("$buildDir/libs"))
    // Add content with from(...), e.g. compiled classes; otherwise the JAR contains only a manifest
}

Running an application (JavaExec task)

task runApp(type: JavaExec) {
    mainClass = "com.example.Main"
    classpath = sourceSets.main.runtimeClasspath // sourceSets is provided by the 'java' plugin
}

Cleaning build directories (clean task)

task cleanBuild {
    doLast {
        delete file("build")
        println "Build directory cleaned!"
    }
}

Are Plugin Tasks Part of the Main Task?

  • No, plugin tasks do not run automatically unless explicitly executed or added as dependencies.
  • Applying a plugin (e.g., java) provides tasks like compileJava, test, and jar, but they must be invoked or referenced.

Example:

apply plugin: 'java' // Adds Java-related tasks

task myBuildTask {
    dependsOn 'build' // Now includes plugin tasks
    doLast { println "Custom build complete!" }
}

Running gradle myBuildTask executes Java plugin tasks (compileJava, test, jar, etc.) before myBuildTask.


Do You Need doLast for the Main Task?

  • If a task has no type, the main action must be inside doLast {}.

    task myTask {
      doLast { println "Executing my task!" }
    }
  • If a task has a type, it already has built-in behavior, so doLast {} is only needed for additional actions (see the example after this list).

    task copyFiles(type: Copy) {
      from 'src/resources'
      into 'build/resources'
    }
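
For instance, attaching a doLast {} block to a typed task runs an extra step after its built-in action. A minimal sketch (the task name and println are illustrative):

task copyResources(type: Copy) {
    from 'src/resources'
    into 'build/resources'
    doLast { println "Resources copied!" } // extra action, runs after the built-in copy
}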

Avoid Running Actions Outside doLast

task badTask {
    println "This runs during the configuration phase!"
}

Problem: The message prints immediately during configuration, not when the task executes.

Solution: Use doLast {}.
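
For example, moving the println into doLast {} (the task name below is illustrative) makes the message print only when the task actually executes:

task goodTask {
    doLast { println "This runs during the execution phase!" }
}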


Final Takeaways

✅ Gradle tasks go through Initialization → Configuration → Execution phases.

✅ Tasks without a type need doLast {} for their main logic.

✅ Plugin tasks are independent but can be linked via dependencies.

✅ Use built-in tasks (e.g., Copy, Jar, JavaExec) when possible.

✅ Always place executable logic inside doLast {} for tasks without predefined behavior.

By understanding these concepts, you can write efficient Gradle scripts that optimize build processes. 🚀

Prompt Engineering: Guiding AI for Optimal Results

Large Language Models (LLMs) are powerful tools, but their effectiveness hinges on how we interact with them. Prompt engineering, the art of crafting effective inputs, is crucial for unlocking the full potential of these models. Several key techniques can significantly improve the quality and relevance of LLM outputs. Let's explore some of these essential methods.

Zero-Shot Learning: Tapping into Existing Knowledge

Zero-shot learning leverages the LLM's pre-trained knowledge to perform tasks without specific examples. The prompt is designed to directly elicit the desired response.

  • Example: Classify the following text as either 'positive', 'negative', or 'neutral': 'The new restaurant was a complete disappointment. The food was bland, and the service was slow.' The expected output is 'negative'. The model uses its understanding of language and sentiment to classify the text without prior examples of restaurant reviews.

Few-Shot Learning: Guiding with Examples

Few-shot learning provides the LLM with a handful of examples demonstrating the desired input-output relationship. These examples serve as a guide for the model to understand the task and generate appropriate responses.

  • Example:

    Text: "I just won the lottery!" Emotion: Surprise
    Text: "My cat ran away." Emotion: Sadness
    Text: "I got a promotion!" Emotion: Joy
    Text: "The traffic was terrible today." Emotion:

By providing a few examples, we teach the model to recognize patterns and apply them to new input, enabling it to infer the emotion expressed in the last text.

Instruction Prompting: Clear and Concise Directions

Instruction prompting focuses on providing explicit and precise instructions to the LLM. The prompt emphasizes the desired task and the expected format of the output, leaving no room for ambiguity.

  • Example: Write a short poem about the beauty of nature, using no more than 20 words. The model is instructed to create a poem, given the topic and length constraint, ensuring the output adheres to the specified requirements.

Chain-of-Thought Prompting: Encouraging Step-by-Step Reasoning

Chain-of-thought prompting encourages the LLM to explicitly articulate its reasoning process. The prompt guides the model to break down complex problems into smaller, manageable steps, leading to more accurate and transparent results.

  • Example:

    A pizza has 12 slices.
    
    Step 1: Calculate the total number of slices eaten.
    Step 2: Subtract the total slices eaten from the original number of slices.
    
    If Ron eats 2 slices and Ella eats 3 slices, how many slices are left?

    The model should then output the solution along with the reasoning:

    Step 1: Calculate the total number of slices eaten.
    Ron eats 2 slices, and Ella eats 3 slices.
    
    Total slices eaten = 2 + 3 = 5
    
    Step 2: Subtract the total slices eaten from the original number of slices.
    
    Total slices left = 12 - 5 = 7
    
    Answer: 7 slices left.

Knowledge Augmentation: Providing Context and Information

Knowledge augmentation involves supplementing the prompt with external information or context that the LLM might not possess. This is particularly useful for specialized domains or when dealing with factual information.

  • Example: Using the following information: 'The highest mountain in the world is Mount Everest, located in the Himalayas,' answer the question: What is the highest mountain in the world? The provided context ensures the model can answer correctly, even if it doesn't have that fact memorized.

By mastering these prompt engineering techniques, we can effectively guide LLMs to generate more relevant, accurate, and creative outputs, unlocking their true potential and making them valuable tools for a wide range of applications.